00:00:00.001 Started by upstream project "autotest-nightly" build number 4148
00:00:00.001 originally caused by:
00:00:00.002 Started by upstream project "nightly-trigger" build number 3510
00:00:00.002 originally caused by:
00:00:00.002 Started by timer
00:00:00.028 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.029 The recommended git tool is: git
00:00:00.030 using credential 00000000-0000-0000-0000-000000000002
00:00:00.032 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.052 Fetching changes from the remote Git repository
00:00:00.055 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.082 Using shallow fetch with depth 1
00:00:00.082 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.082 > git --version # timeout=10
00:00:00.137 > git --version # 'git version 2.39.2'
00:00:00.137 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.201 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.201 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.294 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.305 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.317 Checking out Revision 1913354106d3abc3c9aeb027a32277f58731b4dc (FETCH_HEAD)
00:00:03.317 > git config core.sparsecheckout # timeout=10
00:00:03.329 > git read-tree -mu HEAD # timeout=10
00:00:03.345 > git checkout -f 1913354106d3abc3c9aeb027a32277f58731b4dc # timeout=5
00:00:03.366 Commit message: "jenkins: update jenkins to 2.462.2 and update plugins to its latest versions"
00:00:03.366 > git rev-list --no-walk 1913354106d3abc3c9aeb027a32277f58731b4dc # timeout=10
00:00:03.570 [Pipeline] Start of Pipeline
00:00:03.589 [Pipeline] library
00:00:03.591 Loading library shm_lib@master
00:00:03.591 Library shm_lib@master is cached. Copying from home.
00:00:03.606 [Pipeline] node
00:00:03.628 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest
00:00:03.630 [Pipeline] {
00:00:03.639 [Pipeline] catchError
00:00:03.641 [Pipeline] {
00:00:03.654 [Pipeline] wrap
00:00:03.665 [Pipeline] {
00:00:03.674 [Pipeline] stage
00:00:03.675 [Pipeline] { (Prologue)
00:00:03.694 [Pipeline] echo
00:00:03.695 Node: VM-host-WFP7
00:00:03.703 [Pipeline] cleanWs
00:00:03.713 [WS-CLEANUP] Deleting project workspace...
00:00:03.713 [WS-CLEANUP] Deferred wipeout is used...
00:00:03.718 [WS-CLEANUP] done
00:00:03.952 [Pipeline] setCustomBuildProperty
00:00:04.065 [Pipeline] httpRequest
00:00:04.711 [Pipeline] echo
00:00:04.712 Sorcerer 10.211.164.23 is alive
00:00:04.719 [Pipeline] retry
00:00:04.722 [Pipeline] {
00:00:04.736 [Pipeline] httpRequest
00:00:04.741 HttpMethod: GET
00:00:04.741 URL: http://10.211.164.23/packages/jbp_1913354106d3abc3c9aeb027a32277f58731b4dc.tar.gz
00:00:04.741 Sending request to url: http://10.211.164.23/packages/jbp_1913354106d3abc3c9aeb027a32277f58731b4dc.tar.gz
00:00:04.744 Response Code: HTTP/1.1 200 OK
00:00:04.744 Success: Status code 200 is in the accepted range: 200,404
00:00:04.745 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_1913354106d3abc3c9aeb027a32277f58731b4dc.tar.gz
00:00:04.890 [Pipeline] }
00:00:04.906 [Pipeline] // retry
00:00:04.913 [Pipeline] sh
00:00:05.196 + tar --no-same-owner -xf jbp_1913354106d3abc3c9aeb027a32277f58731b4dc.tar.gz
00:00:05.212 [Pipeline] httpRequest
00:00:06.204 [Pipeline] echo
00:00:06.205 Sorcerer 10.211.164.23 is alive
00:00:06.213 [Pipeline] retry
00:00:06.215 [Pipeline] {
00:00:06.229 [Pipeline] httpRequest
00:00:06.232 HttpMethod: GET
00:00:06.233 URL: http://10.211.164.23/packages/spdk_3950cd1bb06afd1aee639e4df4d9335440fe2ead.tar.gz
00:00:06.233 Sending request to url: http://10.211.164.23/packages/spdk_3950cd1bb06afd1aee639e4df4d9335440fe2ead.tar.gz
00:00:06.236 Response Code: HTTP/1.1 200 OK
00:00:06.236 Success: Status code 200 is in the accepted range: 200,404
00:00:06.237 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_3950cd1bb06afd1aee639e4df4d9335440fe2ead.tar.gz
00:00:30.813 [Pipeline] }
00:00:30.827 [Pipeline] // retry
00:00:30.834 [Pipeline] sh
00:00:31.117 + tar --no-same-owner -xf spdk_3950cd1bb06afd1aee639e4df4d9335440fe2ead.tar.gz
00:00:33.670 [Pipeline] sh
00:00:33.954 + git -C spdk log --oneline -n5
00:00:33.954 3950cd1bb bdev/nvme: Change spdk_bdev_reset() to succeed if at least one nvme_ctrlr is reconnected
00:00:33.954 f9141d271 test/blob: Add BLOCKLEN macro in blob_ut
00:00:33.954 82c46626a lib/event: implement scheduler trace events
00:00:33.954 fa6aec495 lib/thread: register thread owner type for scheduler trace events
00:00:33.954 1876d41a3 include/spdk_internal: define scheduler tracegroup and tracepoints
00:00:33.974 [Pipeline] writeFile
00:00:33.991 [Pipeline] sh
00:00:34.277 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:34.289 [Pipeline] sh
00:00:34.572 + cat autorun-spdk.conf
00:00:34.573 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:34.573 SPDK_RUN_ASAN=1
00:00:34.573 SPDK_RUN_UBSAN=1
00:00:34.573 SPDK_TEST_RAID=1
00:00:34.573 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:34.580 RUN_NIGHTLY=1
00:00:34.582 [Pipeline] }
00:00:34.596 [Pipeline] // stage
00:00:34.611 [Pipeline] stage
00:00:34.613 [Pipeline] { (Run VM)
00:00:34.625 [Pipeline] sh
00:00:34.908 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:34.908 + echo 'Start stage prepare_nvme.sh'
00:00:34.908 Start stage prepare_nvme.sh
00:00:34.908 + [[ -n 5 ]]
00:00:34.908 + disk_prefix=ex5
00:00:34.908 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:00:34.908 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:00:34.908 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:00:34.908 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:34.908 ++ SPDK_RUN_ASAN=1
00:00:34.908 ++ SPDK_RUN_UBSAN=1
00:00:34.908 ++ SPDK_TEST_RAID=1
00:00:34.908 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:34.908 ++ RUN_NIGHTLY=1
00:00:34.908 + cd /var/jenkins/workspace/raid-vg-autotest
00:00:34.908 + nvme_files=()
00:00:34.908 + declare -A nvme_files
00:00:34.908 + backend_dir=/var/lib/libvirt/images/backends
00:00:34.908 + nvme_files['nvme.img']=5G
00:00:34.908 + nvme_files['nvme-cmb.img']=5G
00:00:34.908 + nvme_files['nvme-multi0.img']=4G
00:00:34.908 + nvme_files['nvme-multi1.img']=4G
00:00:34.908 + nvme_files['nvme-multi2.img']=4G
00:00:34.908 + nvme_files['nvme-openstack.img']=8G
00:00:34.908 + nvme_files['nvme-zns.img']=5G
00:00:34.908 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:34.908 + (( SPDK_TEST_FTL == 1 ))
00:00:34.908 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:34.908 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:34.908 + for nvme in "${!nvme_files[@]}"
00:00:34.908 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G
00:00:34.908 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:34.908 + for nvme in "${!nvme_files[@]}"
00:00:34.908 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G
00:00:34.908 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:34.908 + for nvme in "${!nvme_files[@]}"
00:00:34.908 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G
00:00:34.908 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:34.908 + for nvme in "${!nvme_files[@]}"
00:00:34.908 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G
00:00:34.908 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:34.908 + for nvme in "${!nvme_files[@]}"
00:00:34.908 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G
00:00:34.908 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:34.908 + for nvme in "${!nvme_files[@]}"
00:00:34.908 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G
00:00:34.908 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:34.908 + for nvme in "${!nvme_files[@]}"
00:00:34.908 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G
00:00:34.908 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:35.168 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu
00:00:35.168 + echo 'End stage prepare_nvme.sh'
00:00:35.168 End stage prepare_nvme.sh
00:00:35.180 [Pipeline] sh
00:00:35.463 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:35.463 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39
00:00:35.463
00:00:35.463 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:00:35.463 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:00:35.463 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:00:35.463 HELP=0
00:00:35.463 DRY_RUN=0
00:00:35.463 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,
00:00:35.463 NVME_DISKS_TYPE=nvme,nvme,
00:00:35.463 NVME_AUTO_CREATE=0
00:00:35.463 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,
00:00:35.463 NVME_CMB=,,
00:00:35.463 NVME_PMR=,,
00:00:35.463 NVME_ZNS=,,
00:00:35.463 NVME_MS=,,
00:00:35.463 NVME_FDP=,,
00:00:35.463 SPDK_VAGRANT_DISTRO=fedora39
00:00:35.463 SPDK_VAGRANT_VMCPU=10
00:00:35.463 SPDK_VAGRANT_VMRAM=12288
00:00:35.463 SPDK_VAGRANT_PROVIDER=libvirt
00:00:35.463 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:35.463 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:35.463 SPDK_OPENSTACK_NETWORK=0
00:00:35.463 VAGRANT_PACKAGE_BOX=0
00:00:35.463 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:00:35.463 FORCE_DISTRO=true
00:00:35.463 VAGRANT_BOX_VERSION=
00:00:35.463 EXTRA_VAGRANTFILES=
00:00:35.463 NIC_MODEL=virtio
00:00:35.463
00:00:35.463 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:00:35.463 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:00:37.366 Bringing machine 'default' up with 'libvirt' provider...
00:00:37.932 ==> default: Creating image (snapshot of base box volume).
00:00:38.190 ==> default: Creating domain with the following settings...
00:00:38.190 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1728117434_fb983b8e3a75d1e438db
00:00:38.190 ==> default: -- Domain type: kvm
00:00:38.190 ==> default: -- Cpus: 10
00:00:38.190 ==> default: -- Feature: acpi
00:00:38.190 ==> default: -- Feature: apic
00:00:38.190 ==> default: -- Feature: pae
00:00:38.190 ==> default: -- Memory: 12288M
00:00:38.190 ==> default: -- Memory Backing: hugepages:
00:00:38.190 ==> default: -- Management MAC:
00:00:38.190 ==> default: -- Loader:
00:00:38.190 ==> default: -- Nvram:
00:00:38.190 ==> default: -- Base box: spdk/fedora39
00:00:38.190 ==> default: -- Storage pool: default
00:00:38.190 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1728117434_fb983b8e3a75d1e438db.img (20G)
00:00:38.190 ==> default: -- Volume Cache: default
00:00:38.190 ==> default: -- Kernel:
00:00:38.190 ==> default: -- Initrd:
00:00:38.190 ==> default: -- Graphics Type: vnc
00:00:38.190 ==> default: -- Graphics Port: -1
00:00:38.190 ==> default: -- Graphics IP: 127.0.0.1
00:00:38.190 ==> default: -- Graphics Password: Not defined
00:00:38.190 ==> default: -- Video Type: cirrus
00:00:38.190 ==> default: -- Video VRAM: 9216
00:00:38.190 ==> default: -- Sound Type:
00:00:38.190 ==> default: -- Keymap: en-us
00:00:38.190 ==> default: -- TPM Path:
00:00:38.190 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:38.190 ==> default: -- Command line args:
00:00:38.190 ==> default: -> value=-device,
00:00:38.190 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:38.190 ==> default: -> value=-drive,
00:00:38.190 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0,
00:00:38.190 ==> default: -> value=-device,
00:00:38.190 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:38.190 ==> default: -> value=-device,
00:00:38.190 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:38.190 ==> default: -> value=-drive,
00:00:38.190 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:00:38.190 ==> default: -> value=-device,
00:00:38.190 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:38.190 ==> default: -> value=-drive,
00:00:38.190 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:00:38.190 ==> default: -> value=-device,
00:00:38.190 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:38.190 ==> default: -> value=-drive,
00:00:38.190 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:00:38.190 ==> default: -> value=-device,
00:00:38.190 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:38.190 ==> default: Creating shared folders metadata...
==> default: Starting domain.
00:00:39.569 ==> default: Waiting for domain to get an IP address...
00:00:57.667 ==> default: Waiting for SSH to become available...
00:00:57.667 ==> default: Configuring and enabling network interfaces...
00:01:04.277 default: SSH address: 192.168.121.4:22
00:01:04.277 default: SSH username: vagrant
00:01:04.277 default: SSH auth method: private key
00:01:06.820 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:14.950 ==> default: Mounting SSHFS shared folder...
00:01:17.494 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:17.494 ==> default: Checking Mount..
00:01:18.881 ==> default: Folder Successfully Mounted!
00:01:18.881 ==> default: Running provisioner: file...
00:01:19.822 default: ~/.gitconfig => .gitconfig
00:01:20.393
00:01:20.393 SUCCESS!
00:01:20.393
00:01:20.393 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:20.393 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:20.393 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:20.393
00:01:20.403 [Pipeline] }
00:01:20.417 [Pipeline] // stage
00:01:20.426 [Pipeline] dir
00:01:20.427 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:01:20.428 [Pipeline] {
00:01:20.440 [Pipeline] catchError
00:01:20.442 [Pipeline] {
00:01:20.453 [Pipeline] sh
00:01:20.737 + vagrant ssh-config --host vagrant
00:01:20.737 + sed -ne /^Host/,$p
00:01:20.737 + tee ssh_conf
00:01:23.283 Host vagrant
00:01:23.283 HostName 192.168.121.4
00:01:23.283 User vagrant
00:01:23.283 Port 22
00:01:23.283 UserKnownHostsFile /dev/null
00:01:23.283 StrictHostKeyChecking no
00:01:23.283 PasswordAuthentication no
00:01:23.283 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:23.283 IdentitiesOnly yes
00:01:23.283 LogLevel FATAL
00:01:23.283 ForwardAgent yes
00:01:23.283 ForwardX11 yes
00:01:23.283
00:01:23.298 [Pipeline] withEnv
00:01:23.300 [Pipeline] {
00:01:23.315 [Pipeline] sh
00:01:23.595 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:23.595 source /etc/os-release
00:01:23.595 [[ -e /image.version ]] && img=$(< /image.version)
00:01:23.595 # Minimal, systemd-like check.
00:01:23.595 if [[ -e /.dockerenv ]]; then
00:01:23.595 # Clear garbage from the node's name:
00:01:23.595 # agt-er_autotest_547-896 -> autotest_547-896
00:01:23.595 # $HOSTNAME is the actual container id
00:01:23.595 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:23.595 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:23.595 # We can assume this is a mount from a host where container is running,
00:01:23.595 # so fetch its hostname to easily identify the target swarm worker.
00:01:23.595 container="$(< /etc/hostname) ($agent)"
00:01:23.595 else
00:01:23.595 # Fallback
00:01:23.595 container=$agent
00:01:23.595 fi
00:01:23.595 fi
00:01:23.595 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:23.595
00:01:23.869 [Pipeline] }
00:01:23.886 [Pipeline] // withEnv
00:01:23.894 [Pipeline] setCustomBuildProperty
00:01:23.910 [Pipeline] stage
00:01:23.912 [Pipeline] { (Tests)
00:01:23.931 [Pipeline] sh
00:01:24.244 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:24.518 [Pipeline] sh
00:01:24.802 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:25.081 [Pipeline] timeout
00:01:25.081 Timeout set to expire in 1 hr 30 min
00:01:25.084 [Pipeline] {
00:01:25.097 [Pipeline] sh
00:01:25.381 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:25.951 HEAD is now at 3950cd1bb bdev/nvme: Change spdk_bdev_reset() to succeed if at least one nvme_ctrlr is reconnected
00:01:25.965 [Pipeline] sh
00:01:26.254 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:26.531 [Pipeline] sh
00:01:26.815 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:27.113 [Pipeline] sh
00:01:27.395 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:01:27.656 ++ readlink -f spdk_repo
00:01:27.656 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:27.656 + [[ -n /home/vagrant/spdk_repo ]]
00:01:27.656 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:27.656 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:27.656 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:27.656 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:27.656 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:27.656 + [[ raid-vg-autotest == pkgdep-* ]]
00:01:27.656 + cd /home/vagrant/spdk_repo
00:01:27.656 + source /etc/os-release
00:01:27.656 ++ NAME='Fedora Linux'
00:01:27.656 ++ VERSION='39 (Cloud Edition)'
00:01:27.656 ++ ID=fedora
00:01:27.656 ++ VERSION_ID=39
00:01:27.656 ++ VERSION_CODENAME=
00:01:27.656 ++ PLATFORM_ID=platform:f39
00:01:27.656 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:27.656 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:27.656 ++ LOGO=fedora-logo-icon
00:01:27.656 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:27.656 ++ HOME_URL=https://fedoraproject.org/
00:01:27.656 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:27.656 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:27.656 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:27.656 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:27.656 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:27.656 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:27.656 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:27.656 ++ SUPPORT_END=2024-11-12
00:01:27.656 ++ VARIANT='Cloud Edition'
00:01:27.656 ++ VARIANT_ID=cloud
00:01:27.656 + uname -a
00:01:27.656 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:27.656 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:28.228 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:28.228 Hugepages
00:01:28.228 node hugesize free / total
00:01:28.228 node0 1048576kB 0 / 0
00:01:28.228 node0 2048kB 0 / 0
00:01:28.228
00:01:28.228 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:28.228 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:28.228 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:28.228 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:01:28.228 + rm -f /tmp/spdk-ld-path
00:01:28.228 + source autorun-spdk.conf
00:01:28.228 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:28.228 ++ SPDK_RUN_ASAN=1
00:01:28.228 ++ SPDK_RUN_UBSAN=1
00:01:28.228 ++ SPDK_TEST_RAID=1
00:01:28.228 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:28.228 ++ RUN_NIGHTLY=1
00:01:28.228 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:28.228 + [[ -n '' ]]
00:01:28.228 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:28.228 + for M in /var/spdk/build-*-manifest.txt
00:01:28.228 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:28.228 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:28.490 + for M in /var/spdk/build-*-manifest.txt
00:01:28.490 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:28.490 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:28.490 + for M in /var/spdk/build-*-manifest.txt
00:01:28.490 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:28.490 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:28.490 ++ uname
00:01:28.490 + [[ Linux == \L\i\n\u\x ]]
00:01:28.490 + sudo dmesg -T
00:01:28.490 + sudo dmesg --clear
00:01:28.490 + dmesg_pid=5429
00:01:28.490 + [[ Fedora Linux == FreeBSD ]]
00:01:28.490 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:28.490 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:28.490 + sudo dmesg -Tw
00:01:28.490 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:28.490 + [[ -x /usr/src/fio-static/fio ]]
00:01:28.490 + export FIO_BIN=/usr/src/fio-static/fio
00:01:28.490 + FIO_BIN=/usr/src/fio-static/fio
00:01:28.490 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:28.490 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:28.490 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:28.490 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:28.490 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:28.490 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:28.490 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:28.490 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:28.490 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:28.490 Test configuration:
00:01:28.490 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:28.490 SPDK_RUN_ASAN=1
00:01:28.490 SPDK_RUN_UBSAN=1
00:01:28.490 SPDK_TEST_RAID=1
00:01:28.490 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:28.490 RUN_NIGHTLY=1
08:38:04 -- common/autotest_common.sh@1680 -- $ [[ n == y ]]
00:01:28.490 08:38:04 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
08:38:04 -- scripts/common.sh@15 -- $ shopt -s extglob
08:38:04 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
08:38:04 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
08:38:04 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
08:38:04 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
08:38:04 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
08:38:04 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
08:38:04 -- paths/export.sh@5 -- $ export PATH
08:38:04 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
08:38:04 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
08:38:04 -- common/autobuild_common.sh@486 -- $ date +%s
00:01:28.751 08:38:04 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728117484.XXXXXX
08:38:04 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728117484.mCvVbG
08:38:04 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
08:38:04 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
08:38:04 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
08:38:04 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
08:38:04 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
08:38:04 -- common/autobuild_common.sh@502 -- $ get_config_params
08:38:04 -- common/autotest_common.sh@407 -- $ xtrace_disable
08:38:04 -- common/autotest_common.sh@10 -- $ set +x
00:01:28.752 08:38:04 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
08:38:04 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
08:38:04 -- pm/common@17 -- $ local monitor
08:38:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
08:38:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
08:38:04 -- pm/common@25 -- $ sleep 1
08:38:04 -- pm/common@21 -- $ date +%s
08:38:04 -- pm/common@21 -- $ date +%s
08:38:04 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728117484
08:38:05 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728117484
00:01:28.752 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728117484_collect-vmstat.pm.log
00:01:28.752 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728117484_collect-cpu-load.pm.log
00:01:29.694 08:38:05 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:01:29.694 08:38:05 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:29.694 08:38:05 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:29.694 08:38:05 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:29.694 08:38:05 -- spdk/autobuild.sh@16 -- $ date -u
00:01:29.694 Sat Oct 5 08:38:06 AM UTC 2024
00:01:29.694 08:38:06 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:29.694 v25.01-pre-35-g3950cd1bb
00:01:29.694 08:38:06 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:29.694 08:38:06 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:29.694 08:38:06 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:29.694 08:38:06 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:29.694 08:38:06 -- common/autotest_common.sh@10 -- $ set +x
00:01:29.694 ************************************
00:01:29.694 START TEST asan
00:01:29.694 ************************************
00:01:29.694 using asan
00:01:29.694 08:38:06 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan'
00:01:29.694
00:01:29.694 real 0m0.001s
00:01:29.694 user 0m0.000s
00:01:29.694 sys 0m0.001s
00:01:29.694 08:38:06 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:29.694 08:38:06 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:29.694 ************************************
00:01:29.694 END TEST asan
00:01:29.694 ************************************
00:01:29.694 08:38:06 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:29.694 08:38:06 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:29.694 08:38:06 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:29.694 08:38:06 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:29.694 08:38:06 -- common/autotest_common.sh@10 -- $ set +x
00:01:29.694 ************************************
00:01:29.694 START TEST ubsan
00:01:29.694 ************************************
00:01:29.694 using ubsan
00:01:29.694 08:38:06 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:01:29.694
00:01:29.694 real 0m0.000s
00:01:29.694 user 0m0.000s
00:01:29.694 sys 0m0.000s
00:01:29.694 08:38:06 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:29.694 08:38:06 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:29.694 ************************************
00:01:29.694 END TEST ubsan
00:01:29.694 ************************************
00:01:29.954 08:38:06 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:29.954 08:38:06 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:29.954 08:38:06 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:29.954 08:38:06 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:29.954 08:38:06 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:29.954 08:38:06 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:29.954 08:38:06 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:29.954 08:38:06 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:29.954 08:38:06 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:01:29.954 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:29.954 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:30.525 Using 'verbs' RDMA provider
00:01:46.408 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:04.518 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:04.518 Creating mk/config.mk...done.
00:02:04.518 Creating mk/cc.flags.mk...done.
00:02:04.518 Type 'make' to build.
00:02:04.518 08:38:38 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:04.518 08:38:38 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:02:04.518 08:38:38 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:04.518 08:38:38 -- common/autotest_common.sh@10 -- $ set +x
00:02:04.518 ************************************
00:02:04.518 START TEST make
00:02:04.518 ************************************
00:02:04.518 08:38:38 make -- common/autotest_common.sh@1125 -- $ make -j10
00:02:04.518 make[1]: Nothing to be done for 'all'.
00:02:12.710 The Meson build system
00:02:12.710 Version: 1.5.0
00:02:12.710 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:12.710 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:12.710 Build type: native build
00:02:12.710 Program cat found: YES (/usr/bin/cat)
00:02:12.710 Project name: DPDK
00:02:12.710 Project version: 24.03.0
00:02:12.710 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:12.710 C linker for the host machine: cc ld.bfd 2.40-14
00:02:12.710 Host machine cpu family: x86_64
00:02:12.710 Host machine cpu: x86_64
00:02:12.710 Message: ## Building in Developer Mode ##
00:02:12.710 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:12.710 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:12.710 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:12.710 Program python3 found: YES (/usr/bin/python3)
00:02:12.710 Program cat found: YES (/usr/bin/cat)
00:02:12.710 Compiler for C supports arguments -march=native: YES
00:02:12.710 Checking for size of "void *" : 8
00:02:12.710 Checking for size of "void *" : 8 (cached)
00:02:12.710 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:12.710 Library m found: YES
00:02:12.710 Library numa found: YES
00:02:12.710 Has header "numaif.h" : YES
00:02:12.710 Library fdt found: NO
00:02:12.710 Library execinfo found: NO
00:02:12.710 Has header "execinfo.h" : YES
00:02:12.710 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:12.710 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:12.710 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:12.710 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:12.710 Run-time dependency openssl found: YES 3.1.1
00:02:12.710 Run-time dependency libpcap found: YES 1.10.4
00:02:12.710 Has header "pcap.h" with dependency libpcap: YES
00:02:12.710 Compiler for C supports arguments -Wcast-qual: YES
00:02:12.710 Compiler for C supports arguments -Wdeprecated: YES
00:02:12.710 Compiler for C supports arguments -Wformat: YES
00:02:12.710 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:12.710 Compiler for C supports arguments -Wformat-security: NO
00:02:12.710 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:12.710 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:12.710 Compiler for C supports arguments -Wnested-externs: YES
00:02:12.710 Compiler for C supports arguments -Wold-style-definition: YES
00:02:12.710 Compiler for C supports arguments -Wpointer-arith: YES
00:02:12.710 Compiler for C supports arguments -Wsign-compare: YES
00:02:12.710 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:12.710 Compiler for C supports arguments -Wundef: YES
00:02:12.710 Compiler for C supports arguments -Wwrite-strings: YES
00:02:12.710 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:12.710 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:12.710 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:12.710 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:12.710 Program objdump found: YES (/usr/bin/objdump)
00:02:12.710 Compiler for C supports arguments -mavx512f: YES
00:02:12.710 Checking if "AVX512
checking" compiles: YES 00:02:12.710 Fetching value of define "__SSE4_2__" : 1 00:02:12.710 Fetching value of define "__AES__" : 1 00:02:12.710 Fetching value of define "__AVX__" : 1 00:02:12.710 Fetching value of define "__AVX2__" : 1 00:02:12.710 Fetching value of define "__AVX512BW__" : 1 00:02:12.710 Fetching value of define "__AVX512CD__" : 1 00:02:12.710 Fetching value of define "__AVX512DQ__" : 1 00:02:12.710 Fetching value of define "__AVX512F__" : 1 00:02:12.710 Fetching value of define "__AVX512VL__" : 1 00:02:12.710 Fetching value of define "__PCLMUL__" : 1 00:02:12.710 Fetching value of define "__RDRND__" : 1 00:02:12.710 Fetching value of define "__RDSEED__" : 1 00:02:12.710 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:12.710 Fetching value of define "__znver1__" : (undefined) 00:02:12.710 Fetching value of define "__znver2__" : (undefined) 00:02:12.710 Fetching value of define "__znver3__" : (undefined) 00:02:12.710 Fetching value of define "__znver4__" : (undefined) 00:02:12.710 Library asan found: YES 00:02:12.710 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:12.710 Message: lib/log: Defining dependency "log" 00:02:12.710 Message: lib/kvargs: Defining dependency "kvargs" 00:02:12.710 Message: lib/telemetry: Defining dependency "telemetry" 00:02:12.710 Library rt found: YES 00:02:12.710 Checking for function "getentropy" : NO 00:02:12.710 Message: lib/eal: Defining dependency "eal" 00:02:12.710 Message: lib/ring: Defining dependency "ring" 00:02:12.710 Message: lib/rcu: Defining dependency "rcu" 00:02:12.710 Message: lib/mempool: Defining dependency "mempool" 00:02:12.710 Message: lib/mbuf: Defining dependency "mbuf" 00:02:12.710 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:12.710 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:12.710 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:12.710 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:12.710 Fetching value of define 
"__AVX512VL__" : 1 (cached) 00:02:12.710 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:12.710 Compiler for C supports arguments -mpclmul: YES 00:02:12.710 Compiler for C supports arguments -maes: YES 00:02:12.710 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:12.710 Compiler for C supports arguments -mavx512bw: YES 00:02:12.710 Compiler for C supports arguments -mavx512dq: YES 00:02:12.710 Compiler for C supports arguments -mavx512vl: YES 00:02:12.710 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:12.710 Compiler for C supports arguments -mavx2: YES 00:02:12.710 Compiler for C supports arguments -mavx: YES 00:02:12.710 Message: lib/net: Defining dependency "net" 00:02:12.710 Message: lib/meter: Defining dependency "meter" 00:02:12.710 Message: lib/ethdev: Defining dependency "ethdev" 00:02:12.710 Message: lib/pci: Defining dependency "pci" 00:02:12.710 Message: lib/cmdline: Defining dependency "cmdline" 00:02:12.710 Message: lib/hash: Defining dependency "hash" 00:02:12.710 Message: lib/timer: Defining dependency "timer" 00:02:12.710 Message: lib/compressdev: Defining dependency "compressdev" 00:02:12.710 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:12.710 Message: lib/dmadev: Defining dependency "dmadev" 00:02:12.710 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:12.710 Message: lib/power: Defining dependency "power" 00:02:12.710 Message: lib/reorder: Defining dependency "reorder" 00:02:12.710 Message: lib/security: Defining dependency "security" 00:02:12.710 Has header "linux/userfaultfd.h" : YES 00:02:12.710 Has header "linux/vduse.h" : YES 00:02:12.710 Message: lib/vhost: Defining dependency "vhost" 00:02:12.710 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:12.710 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:12.710 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:12.710 Message: drivers/mempool/ring: Defining 
dependency "mempool_ring" 00:02:12.710 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:12.710 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:12.710 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:12.710 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:12.710 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:12.711 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:12.711 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:12.711 Configuring doxy-api-html.conf using configuration 00:02:12.711 Configuring doxy-api-man.conf using configuration 00:02:12.711 Program mandb found: YES (/usr/bin/mandb) 00:02:12.711 Program sphinx-build found: NO 00:02:12.711 Configuring rte_build_config.h using configuration 00:02:12.711 Message: 00:02:12.711 ================= 00:02:12.711 Applications Enabled 00:02:12.711 ================= 00:02:12.711 00:02:12.711 apps: 00:02:12.711 00:02:12.711 00:02:12.711 Message: 00:02:12.711 ================= 00:02:12.711 Libraries Enabled 00:02:12.711 ================= 00:02:12.711 00:02:12.711 libs: 00:02:12.711 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:12.711 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:12.711 cryptodev, dmadev, power, reorder, security, vhost, 00:02:12.711 00:02:12.711 Message: 00:02:12.711 =============== 00:02:12.711 Drivers Enabled 00:02:12.711 =============== 00:02:12.711 00:02:12.711 common: 00:02:12.711 00:02:12.711 bus: 00:02:12.711 pci, vdev, 00:02:12.711 mempool: 00:02:12.711 ring, 00:02:12.711 dma: 00:02:12.711 00:02:12.711 net: 00:02:12.711 00:02:12.711 crypto: 00:02:12.711 00:02:12.711 compress: 00:02:12.711 00:02:12.711 vdpa: 00:02:12.711 00:02:12.711 00:02:12.711 Message: 00:02:12.711 ================= 00:02:12.711 Content Skipped 00:02:12.711 ================= 00:02:12.711 00:02:12.711 apps: 
00:02:12.711 dumpcap: explicitly disabled via build config 00:02:12.711 graph: explicitly disabled via build config 00:02:12.711 pdump: explicitly disabled via build config 00:02:12.711 proc-info: explicitly disabled via build config 00:02:12.711 test-acl: explicitly disabled via build config 00:02:12.711 test-bbdev: explicitly disabled via build config 00:02:12.711 test-cmdline: explicitly disabled via build config 00:02:12.711 test-compress-perf: explicitly disabled via build config 00:02:12.711 test-crypto-perf: explicitly disabled via build config 00:02:12.711 test-dma-perf: explicitly disabled via build config 00:02:12.711 test-eventdev: explicitly disabled via build config 00:02:12.711 test-fib: explicitly disabled via build config 00:02:12.711 test-flow-perf: explicitly disabled via build config 00:02:12.711 test-gpudev: explicitly disabled via build config 00:02:12.711 test-mldev: explicitly disabled via build config 00:02:12.711 test-pipeline: explicitly disabled via build config 00:02:12.711 test-pmd: explicitly disabled via build config 00:02:12.711 test-regex: explicitly disabled via build config 00:02:12.711 test-sad: explicitly disabled via build config 00:02:12.711 test-security-perf: explicitly disabled via build config 00:02:12.711 00:02:12.711 libs: 00:02:12.711 argparse: explicitly disabled via build config 00:02:12.711 metrics: explicitly disabled via build config 00:02:12.711 acl: explicitly disabled via build config 00:02:12.711 bbdev: explicitly disabled via build config 00:02:12.711 bitratestats: explicitly disabled via build config 00:02:12.711 bpf: explicitly disabled via build config 00:02:12.711 cfgfile: explicitly disabled via build config 00:02:12.711 distributor: explicitly disabled via build config 00:02:12.711 efd: explicitly disabled via build config 00:02:12.711 eventdev: explicitly disabled via build config 00:02:12.711 dispatcher: explicitly disabled via build config 00:02:12.711 gpudev: explicitly disabled via build config 
00:02:12.711 gro: explicitly disabled via build config 00:02:12.711 gso: explicitly disabled via build config 00:02:12.711 ip_frag: explicitly disabled via build config 00:02:12.711 jobstats: explicitly disabled via build config 00:02:12.711 latencystats: explicitly disabled via build config 00:02:12.711 lpm: explicitly disabled via build config 00:02:12.711 member: explicitly disabled via build config 00:02:12.711 pcapng: explicitly disabled via build config 00:02:12.711 rawdev: explicitly disabled via build config 00:02:12.711 regexdev: explicitly disabled via build config 00:02:12.711 mldev: explicitly disabled via build config 00:02:12.711 rib: explicitly disabled via build config 00:02:12.711 sched: explicitly disabled via build config 00:02:12.711 stack: explicitly disabled via build config 00:02:12.711 ipsec: explicitly disabled via build config 00:02:12.711 pdcp: explicitly disabled via build config 00:02:12.711 fib: explicitly disabled via build config 00:02:12.711 port: explicitly disabled via build config 00:02:12.711 pdump: explicitly disabled via build config 00:02:12.711 table: explicitly disabled via build config 00:02:12.711 pipeline: explicitly disabled via build config 00:02:12.711 graph: explicitly disabled via build config 00:02:12.711 node: explicitly disabled via build config 00:02:12.711 00:02:12.711 drivers: 00:02:12.711 common/cpt: not in enabled drivers build config 00:02:12.711 common/dpaax: not in enabled drivers build config 00:02:12.711 common/iavf: not in enabled drivers build config 00:02:12.711 common/idpf: not in enabled drivers build config 00:02:12.711 common/ionic: not in enabled drivers build config 00:02:12.711 common/mvep: not in enabled drivers build config 00:02:12.711 common/octeontx: not in enabled drivers build config 00:02:12.711 bus/auxiliary: not in enabled drivers build config 00:02:12.711 bus/cdx: not in enabled drivers build config 00:02:12.711 bus/dpaa: not in enabled drivers build config 00:02:12.711 bus/fslmc: 
not in enabled drivers build config 00:02:12.711 bus/ifpga: not in enabled drivers build config 00:02:12.711 bus/platform: not in enabled drivers build config 00:02:12.711 bus/uacce: not in enabled drivers build config 00:02:12.711 bus/vmbus: not in enabled drivers build config 00:02:12.711 common/cnxk: not in enabled drivers build config 00:02:12.711 common/mlx5: not in enabled drivers build config 00:02:12.711 common/nfp: not in enabled drivers build config 00:02:12.711 common/nitrox: not in enabled drivers build config 00:02:12.711 common/qat: not in enabled drivers build config 00:02:12.711 common/sfc_efx: not in enabled drivers build config 00:02:12.711 mempool/bucket: not in enabled drivers build config 00:02:12.711 mempool/cnxk: not in enabled drivers build config 00:02:12.711 mempool/dpaa: not in enabled drivers build config 00:02:12.711 mempool/dpaa2: not in enabled drivers build config 00:02:12.711 mempool/octeontx: not in enabled drivers build config 00:02:12.711 mempool/stack: not in enabled drivers build config 00:02:12.711 dma/cnxk: not in enabled drivers build config 00:02:12.711 dma/dpaa: not in enabled drivers build config 00:02:12.711 dma/dpaa2: not in enabled drivers build config 00:02:12.711 dma/hisilicon: not in enabled drivers build config 00:02:12.711 dma/idxd: not in enabled drivers build config 00:02:12.711 dma/ioat: not in enabled drivers build config 00:02:12.711 dma/skeleton: not in enabled drivers build config 00:02:12.711 net/af_packet: not in enabled drivers build config 00:02:12.711 net/af_xdp: not in enabled drivers build config 00:02:12.711 net/ark: not in enabled drivers build config 00:02:12.711 net/atlantic: not in enabled drivers build config 00:02:12.711 net/avp: not in enabled drivers build config 00:02:12.711 net/axgbe: not in enabled drivers build config 00:02:12.711 net/bnx2x: not in enabled drivers build config 00:02:12.711 net/bnxt: not in enabled drivers build config 00:02:12.711 net/bonding: not in enabled drivers 
build config 00:02:12.711 net/cnxk: not in enabled drivers build config 00:02:12.711 net/cpfl: not in enabled drivers build config 00:02:12.711 net/cxgbe: not in enabled drivers build config 00:02:12.711 net/dpaa: not in enabled drivers build config 00:02:12.711 net/dpaa2: not in enabled drivers build config 00:02:12.711 net/e1000: not in enabled drivers build config 00:02:12.711 net/ena: not in enabled drivers build config 00:02:12.711 net/enetc: not in enabled drivers build config 00:02:12.711 net/enetfec: not in enabled drivers build config 00:02:12.711 net/enic: not in enabled drivers build config 00:02:12.711 net/failsafe: not in enabled drivers build config 00:02:12.711 net/fm10k: not in enabled drivers build config 00:02:12.711 net/gve: not in enabled drivers build config 00:02:12.711 net/hinic: not in enabled drivers build config 00:02:12.711 net/hns3: not in enabled drivers build config 00:02:12.711 net/i40e: not in enabled drivers build config 00:02:12.711 net/iavf: not in enabled drivers build config 00:02:12.711 net/ice: not in enabled drivers build config 00:02:12.711 net/idpf: not in enabled drivers build config 00:02:12.711 net/igc: not in enabled drivers build config 00:02:12.711 net/ionic: not in enabled drivers build config 00:02:12.711 net/ipn3ke: not in enabled drivers build config 00:02:12.711 net/ixgbe: not in enabled drivers build config 00:02:12.711 net/mana: not in enabled drivers build config 00:02:12.711 net/memif: not in enabled drivers build config 00:02:12.711 net/mlx4: not in enabled drivers build config 00:02:12.711 net/mlx5: not in enabled drivers build config 00:02:12.711 net/mvneta: not in enabled drivers build config 00:02:12.711 net/mvpp2: not in enabled drivers build config 00:02:12.711 net/netvsc: not in enabled drivers build config 00:02:12.711 net/nfb: not in enabled drivers build config 00:02:12.711 net/nfp: not in enabled drivers build config 00:02:12.711 net/ngbe: not in enabled drivers build config 00:02:12.711 net/null: 
not in enabled drivers build config 00:02:12.711 net/octeontx: not in enabled drivers build config 00:02:12.711 net/octeon_ep: not in enabled drivers build config 00:02:12.711 net/pcap: not in enabled drivers build config 00:02:12.711 net/pfe: not in enabled drivers build config 00:02:12.711 net/qede: not in enabled drivers build config 00:02:12.711 net/ring: not in enabled drivers build config 00:02:12.711 net/sfc: not in enabled drivers build config 00:02:12.711 net/softnic: not in enabled drivers build config 00:02:12.711 net/tap: not in enabled drivers build config 00:02:12.711 net/thunderx: not in enabled drivers build config 00:02:12.711 net/txgbe: not in enabled drivers build config 00:02:12.711 net/vdev_netvsc: not in enabled drivers build config 00:02:12.711 net/vhost: not in enabled drivers build config 00:02:12.711 net/virtio: not in enabled drivers build config 00:02:12.711 net/vmxnet3: not in enabled drivers build config 00:02:12.712 raw/*: missing internal dependency, "rawdev" 00:02:12.712 crypto/armv8: not in enabled drivers build config 00:02:12.712 crypto/bcmfs: not in enabled drivers build config 00:02:12.712 crypto/caam_jr: not in enabled drivers build config 00:02:12.712 crypto/ccp: not in enabled drivers build config 00:02:12.712 crypto/cnxk: not in enabled drivers build config 00:02:12.712 crypto/dpaa_sec: not in enabled drivers build config 00:02:12.712 crypto/dpaa2_sec: not in enabled drivers build config 00:02:12.712 crypto/ipsec_mb: not in enabled drivers build config 00:02:12.712 crypto/mlx5: not in enabled drivers build config 00:02:12.712 crypto/mvsam: not in enabled drivers build config 00:02:12.712 crypto/nitrox: not in enabled drivers build config 00:02:12.712 crypto/null: not in enabled drivers build config 00:02:12.712 crypto/octeontx: not in enabled drivers build config 00:02:12.712 crypto/openssl: not in enabled drivers build config 00:02:12.712 crypto/scheduler: not in enabled drivers build config 00:02:12.712 crypto/uadk: not 
in enabled drivers build config 00:02:12.712 crypto/virtio: not in enabled drivers build config 00:02:12.712 compress/isal: not in enabled drivers build config 00:02:12.712 compress/mlx5: not in enabled drivers build config 00:02:12.712 compress/nitrox: not in enabled drivers build config 00:02:12.712 compress/octeontx: not in enabled drivers build config 00:02:12.712 compress/zlib: not in enabled drivers build config 00:02:12.712 regex/*: missing internal dependency, "regexdev" 00:02:12.712 ml/*: missing internal dependency, "mldev" 00:02:12.712 vdpa/ifc: not in enabled drivers build config 00:02:12.712 vdpa/mlx5: not in enabled drivers build config 00:02:12.712 vdpa/nfp: not in enabled drivers build config 00:02:12.712 vdpa/sfc: not in enabled drivers build config 00:02:12.712 event/*: missing internal dependency, "eventdev" 00:02:12.712 baseband/*: missing internal dependency, "bbdev" 00:02:12.712 gpu/*: missing internal dependency, "gpudev" 00:02:12.712 00:02:12.712 00:02:12.712 Build targets in project: 85 00:02:12.712 00:02:12.712 DPDK 24.03.0 00:02:12.712 00:02:12.712 User defined options 00:02:12.712 buildtype : debug 00:02:12.712 default_library : shared 00:02:12.712 libdir : lib 00:02:12.712 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:12.712 b_sanitize : address 00:02:12.712 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:12.712 c_link_args : 00:02:12.712 cpu_instruction_set: native 00:02:12.712 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:12.712 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:12.712 enable_docs : false 00:02:12.712 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:12.712 enable_kmods : false 00:02:12.712 max_lcores : 128 00:02:12.712 tests : false 00:02:12.712 00:02:12.712 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:12.972 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:12.972 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:12.972 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:12.972 [3/268] Linking static target lib/librte_kvargs.a 00:02:13.233 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:13.233 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:13.233 [6/268] Linking static target lib/librte_log.a 00:02:13.493 [7/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:13.493 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:13.493 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:13.493 [10/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.493 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:13.493 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:13.493 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:13.493 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:13.754 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:13.754 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 
00:02:13.754 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:13.754 [18/268] Linking static target lib/librte_telemetry.a 00:02:14.014 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.014 [20/268] Linking target lib/librte_log.so.24.1 00:02:14.014 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:14.014 [22/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:14.014 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:14.014 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:14.014 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:14.274 [26/268] Linking target lib/librte_kvargs.so.24.1 00:02:14.274 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:14.274 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:14.274 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:14.274 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:14.274 [31/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:14.274 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:14.535 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:14.535 [34/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.535 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:14.535 [36/268] Linking target lib/librte_telemetry.so.24.1 00:02:14.535 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:14.795 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:14.795 
[39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:14.795 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:14.795 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:14.795 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:14.795 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:14.795 [44/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:14.795 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:15.056 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:15.056 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:15.316 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:15.316 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:15.316 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:15.316 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:15.316 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:15.316 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:15.576 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:15.576 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:15.576 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:15.576 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:15.576 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:15.835 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:15.835 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:15.835 
[61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:15.835 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:15.835 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:15.835 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:16.095 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:16.095 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:16.095 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:16.355 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:16.355 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:16.355 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:16.355 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:16.355 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:16.355 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:16.615 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:16.615 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:16.615 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:16.615 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:16.615 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:16.615 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:16.875 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:16.875 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:17.135 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:17.135 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:17.135 [84/268] 
Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:17.135 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:17.394 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:17.394 [87/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:17.394 [88/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:17.394 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:17.394 [90/268] Linking static target lib/librte_ring.a 00:02:17.394 [91/268] Linking static target lib/librte_mempool.a 00:02:17.394 [92/268] Linking static target lib/librte_eal.a 00:02:17.654 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:17.654 [94/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:17.654 [95/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:17.654 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:17.654 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:17.654 [98/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:17.654 [99/268] Linking static target lib/librte_rcu.a 00:02:17.654 [100/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.914 [101/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:17.914 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:18.173 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:18.173 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:18.173 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:18.173 [106/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.173 [107/268] Linking static target lib/librte_net.a 00:02:18.173 [108/268] Compiling C object 
lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:18.173 [109/268] Linking static target lib/librte_meter.a 00:02:18.434 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:18.434 [111/268] Linking static target lib/librte_mbuf.a 00:02:18.434 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:18.434 [113/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.434 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:18.702 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:18.702 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.702 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:18.702 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.977 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:18.977 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:18.977 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:19.237 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:19.237 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:19.237 [124/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.497 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:19.497 [126/268] Linking static target lib/librte_pci.a 00:02:19.497 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:19.497 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:19.497 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:19.497 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:19.497 
[131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:19.757 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:19.757 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:19.757 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:19.757 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:19.757 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:19.757 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:19.757 [138/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.757 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:19.757 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:20.017 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:20.017 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:20.017 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:20.017 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:20.017 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:20.017 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:20.017 [147/268] Linking static target lib/librte_cmdline.a 00:02:20.277 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:20.277 [149/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:20.277 [150/268] Linking static target lib/librte_timer.a 00:02:20.537 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:20.537 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:20.537 [153/268] 
Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:20.797 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:20.797 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:20.797 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:20.797 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:21.056 [158/268] Linking static target lib/librte_ethdev.a 00:02:21.057 [159/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.057 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:21.057 [161/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:21.057 [162/268] Linking static target lib/librte_compressdev.a 00:02:21.057 [163/268] Linking static target lib/librte_hash.a 00:02:21.057 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:21.057 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:21.316 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:21.316 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:21.316 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:21.316 [169/268] Linking static target lib/librte_dmadev.a 00:02:21.577 [170/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:21.577 [171/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.577 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:21.577 [173/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:21.837 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:21.837 [175/268] Generating lib/compressdev.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:22.099 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:22.099 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:22.099 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:22.099 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:22.099 [180/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.099 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.099 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:22.360 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:22.360 [184/268] Linking static target lib/librte_cryptodev.a 00:02:22.360 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:22.360 [186/268] Linking static target lib/librte_power.a 00:02:22.620 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:22.620 [188/268] Linking static target lib/librte_reorder.a 00:02:22.621 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:22.621 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:22.880 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:23.140 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:23.140 [193/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:23.140 [194/268] Linking static target lib/librte_security.a 00:02:23.140 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.711 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.711 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:23.711 [198/268] 
Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:23.711 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:23.711 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:23.711 [201/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.971 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:24.231 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:24.231 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:24.231 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:24.231 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:24.491 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:24.491 [208/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:24.491 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:24.491 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:24.491 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.752 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:24.752 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:24.752 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:24.752 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:24.752 [216/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:24.752 [217/268] Linking static target drivers/librte_bus_vdev.a 00:02:24.752 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:24.752 
[219/268] Linking static target drivers/librte_bus_pci.a 00:02:24.752 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:24.752 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:25.012 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.012 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:25.012 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:25.012 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:25.012 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:25.272 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.237 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:28.146 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.146 [230/268] Linking target lib/librte_eal.so.24.1 00:02:28.146 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:28.146 [232/268] Linking target lib/librte_timer.so.24.1 00:02:28.146 [233/268] Linking target lib/librte_pci.so.24.1 00:02:28.406 [234/268] Linking target lib/librte_ring.so.24.1 00:02:28.406 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:28.406 [236/268] Linking target lib/librte_dmadev.so.24.1 00:02:28.406 [237/268] Linking target lib/librte_meter.so.24.1 00:02:28.406 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:28.406 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:28.406 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:28.406 [241/268] Linking target lib/librte_rcu.so.24.1 
00:02:28.406 [242/268] Linking target lib/librte_mempool.so.24.1 00:02:28.406 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:28.406 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:28.406 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:28.665 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:28.665 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:28.665 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:28.665 [249/268] Linking target lib/librte_mbuf.so.24.1 00:02:28.665 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:28.925 [251/268] Linking target lib/librte_compressdev.so.24.1 00:02:28.925 [252/268] Linking target lib/librte_net.so.24.1 00:02:28.925 [253/268] Linking target lib/librte_reorder.so.24.1 00:02:28.925 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:02:28.925 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:28.925 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:28.925 [257/268] Linking target lib/librte_hash.so.24.1 00:02:28.925 [258/268] Linking target lib/librte_cmdline.so.24.1 00:02:28.925 [259/268] Linking target lib/librte_security.so.24.1 00:02:29.185 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:29.445 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.703 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:29.703 [263/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:29.703 [264/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:29.703 [265/268] Linking static target lib/librte_vhost.a 00:02:29.962 [266/268] 
Linking target lib/librte_power.so.24.1 00:02:32.501 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.501 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:32.501 INFO: autodetecting backend as ninja 00:02:32.501 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:54.456 CC lib/ut/ut.o 00:02:54.456 CC lib/log/log.o 00:02:54.456 CC lib/log/log_flags.o 00:02:54.456 CC lib/log/log_deprecated.o 00:02:54.456 CC lib/ut_mock/mock.o 00:02:54.456 LIB libspdk_ut.a 00:02:54.456 LIB libspdk_log.a 00:02:54.456 SO libspdk_ut.so.2.0 00:02:54.456 LIB libspdk_ut_mock.a 00:02:54.456 SO libspdk_log.so.7.0 00:02:54.456 SO libspdk_ut_mock.so.6.0 00:02:54.456 SYMLINK libspdk_ut.so 00:02:54.456 SYMLINK libspdk_log.so 00:02:54.456 SYMLINK libspdk_ut_mock.so 00:02:54.456 CC lib/ioat/ioat.o 00:02:54.456 CC lib/dma/dma.o 00:02:54.456 CXX lib/trace_parser/trace.o 00:02:54.456 CC lib/util/base64.o 00:02:54.456 CC lib/util/bit_array.o 00:02:54.456 CC lib/util/crc32.o 00:02:54.456 CC lib/util/cpuset.o 00:02:54.456 CC lib/util/crc16.o 00:02:54.456 CC lib/util/crc32c.o 00:02:54.456 CC lib/vfio_user/host/vfio_user_pci.o 00:02:54.456 CC lib/util/crc32_ieee.o 00:02:54.456 CC lib/util/crc64.o 00:02:54.456 CC lib/util/dif.o 00:02:54.456 LIB libspdk_dma.a 00:02:54.456 CC lib/vfio_user/host/vfio_user.o 00:02:54.456 CC lib/util/fd.o 00:02:54.456 SO libspdk_dma.so.5.0 00:02:54.456 CC lib/util/fd_group.o 00:02:54.456 CC lib/util/file.o 00:02:54.456 LIB libspdk_ioat.a 00:02:54.456 CC lib/util/hexlify.o 00:02:54.456 SYMLINK libspdk_dma.so 00:02:54.456 CC lib/util/iov.o 00:02:54.456 SO libspdk_ioat.so.7.0 00:02:54.456 SYMLINK libspdk_ioat.so 00:02:54.456 CC lib/util/math.o 00:02:54.456 CC lib/util/net.o 00:02:54.456 CC lib/util/pipe.o 00:02:54.456 LIB libspdk_vfio_user.a 00:02:54.456 CC lib/util/strerror_tls.o 00:02:54.456 SO libspdk_vfio_user.so.5.0 00:02:54.456 CC 
lib/util/string.o 00:02:54.456 SYMLINK libspdk_vfio_user.so 00:02:54.456 CC lib/util/uuid.o 00:02:54.456 CC lib/util/xor.o 00:02:54.456 CC lib/util/zipf.o 00:02:54.456 CC lib/util/md5.o 00:02:54.456 LIB libspdk_util.a 00:02:54.456 SO libspdk_util.so.10.0 00:02:54.456 LIB libspdk_trace_parser.a 00:02:54.456 SO libspdk_trace_parser.so.6.0 00:02:54.456 SYMLINK libspdk_util.so 00:02:54.456 SYMLINK libspdk_trace_parser.so 00:02:54.456 CC lib/rdma_provider/common.o 00:02:54.456 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:54.456 CC lib/env_dpdk/env.o 00:02:54.456 CC lib/env_dpdk/pci.o 00:02:54.456 CC lib/env_dpdk/memory.o 00:02:54.456 CC lib/rdma_utils/rdma_utils.o 00:02:54.456 CC lib/vmd/vmd.o 00:02:54.456 CC lib/json/json_parse.o 00:02:54.456 CC lib/conf/conf.o 00:02:54.456 CC lib/idxd/idxd.o 00:02:54.456 CC lib/idxd/idxd_user.o 00:02:54.456 LIB libspdk_conf.a 00:02:54.456 LIB libspdk_rdma_provider.a 00:02:54.456 CC lib/json/json_util.o 00:02:54.456 SO libspdk_conf.so.6.0 00:02:54.456 SO libspdk_rdma_provider.so.6.0 00:02:54.456 LIB libspdk_rdma_utils.a 00:02:54.456 SO libspdk_rdma_utils.so.1.0 00:02:54.456 SYMLINK libspdk_conf.so 00:02:54.456 SYMLINK libspdk_rdma_provider.so 00:02:54.456 CC lib/vmd/led.o 00:02:54.456 CC lib/json/json_write.o 00:02:54.456 SYMLINK libspdk_rdma_utils.so 00:02:54.456 CC lib/env_dpdk/init.o 00:02:54.456 CC lib/env_dpdk/threads.o 00:02:54.456 CC lib/idxd/idxd_kernel.o 00:02:54.456 CC lib/env_dpdk/pci_ioat.o 00:02:54.456 CC lib/env_dpdk/pci_virtio.o 00:02:54.456 CC lib/env_dpdk/pci_vmd.o 00:02:54.456 CC lib/env_dpdk/pci_idxd.o 00:02:54.456 CC lib/env_dpdk/pci_event.o 00:02:54.456 CC lib/env_dpdk/sigbus_handler.o 00:02:54.456 LIB libspdk_json.a 00:02:54.456 CC lib/env_dpdk/pci_dpdk.o 00:02:54.456 SO libspdk_json.so.6.0 00:02:54.456 SYMLINK libspdk_json.so 00:02:54.456 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:54.456 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:54.456 LIB libspdk_vmd.a 00:02:54.456 LIB libspdk_idxd.a 00:02:54.456 SO 
libspdk_vmd.so.6.0 00:02:54.456 SO libspdk_idxd.so.12.1 00:02:54.456 SYMLINK libspdk_vmd.so 00:02:54.456 SYMLINK libspdk_idxd.so 00:02:54.456 CC lib/jsonrpc/jsonrpc_server.o 00:02:54.456 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:54.456 CC lib/jsonrpc/jsonrpc_client.o 00:02:54.456 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:54.456 LIB libspdk_jsonrpc.a 00:02:54.456 SO libspdk_jsonrpc.so.6.0 00:02:54.456 SYMLINK libspdk_jsonrpc.so 00:02:54.456 LIB libspdk_env_dpdk.a 00:02:54.456 CC lib/rpc/rpc.o 00:02:54.456 SO libspdk_env_dpdk.so.15.0 00:02:54.456 SYMLINK libspdk_env_dpdk.so 00:02:54.456 LIB libspdk_rpc.a 00:02:54.456 SO libspdk_rpc.so.6.0 00:02:54.456 SYMLINK libspdk_rpc.so 00:02:55.025 CC lib/notify/notify.o 00:02:55.025 CC lib/notify/notify_rpc.o 00:02:55.025 CC lib/keyring/keyring.o 00:02:55.025 CC lib/keyring/keyring_rpc.o 00:02:55.025 CC lib/trace/trace.o 00:02:55.025 CC lib/trace/trace_flags.o 00:02:55.025 CC lib/trace/trace_rpc.o 00:02:55.025 LIB libspdk_notify.a 00:02:55.025 SO libspdk_notify.so.6.0 00:02:55.287 LIB libspdk_keyring.a 00:02:55.287 SYMLINK libspdk_notify.so 00:02:55.287 LIB libspdk_trace.a 00:02:55.287 SO libspdk_keyring.so.2.0 00:02:55.287 SO libspdk_trace.so.11.0 00:02:55.287 SYMLINK libspdk_keyring.so 00:02:55.287 SYMLINK libspdk_trace.so 00:02:55.547 CC lib/sock/sock.o 00:02:55.547 CC lib/sock/sock_rpc.o 00:02:55.806 CC lib/thread/thread.o 00:02:55.806 CC lib/thread/iobuf.o 00:02:56.065 LIB libspdk_sock.a 00:02:56.065 SO libspdk_sock.so.10.0 00:02:56.324 SYMLINK libspdk_sock.so 00:02:56.583 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:56.583 CC lib/nvme/nvme_ctrlr.o 00:02:56.583 CC lib/nvme/nvme_fabric.o 00:02:56.583 CC lib/nvme/nvme_ns_cmd.o 00:02:56.583 CC lib/nvme/nvme_ns.o 00:02:56.583 CC lib/nvme/nvme_pcie.o 00:02:56.583 CC lib/nvme/nvme_pcie_common.o 00:02:56.583 CC lib/nvme/nvme.o 00:02:56.583 CC lib/nvme/nvme_qpair.o 00:02:57.151 LIB libspdk_thread.a 00:02:57.151 SO libspdk_thread.so.10.2 00:02:57.151 CC lib/nvme/nvme_quirks.o 00:02:57.151 
SYMLINK libspdk_thread.so 00:02:57.151 CC lib/nvme/nvme_transport.o 00:02:57.435 CC lib/nvme/nvme_discovery.o 00:02:57.435 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:57.435 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:57.435 CC lib/nvme/nvme_tcp.o 00:02:57.435 CC lib/nvme/nvme_opal.o 00:02:57.435 CC lib/nvme/nvme_io_msg.o 00:02:57.695 CC lib/nvme/nvme_poll_group.o 00:02:57.695 CC lib/nvme/nvme_zns.o 00:02:57.956 CC lib/nvme/nvme_stubs.o 00:02:57.956 CC lib/accel/accel.o 00:02:57.956 CC lib/nvme/nvme_auth.o 00:02:57.956 CC lib/blob/blobstore.o 00:02:57.956 CC lib/blob/request.o 00:02:57.956 CC lib/blob/zeroes.o 00:02:58.216 CC lib/blob/blob_bs_dev.o 00:02:58.216 CC lib/accel/accel_rpc.o 00:02:58.475 CC lib/accel/accel_sw.o 00:02:58.475 CC lib/nvme/nvme_cuse.o 00:02:58.475 CC lib/init/json_config.o 00:02:58.475 CC lib/virtio/virtio.o 00:02:58.475 CC lib/fsdev/fsdev.o 00:02:58.735 CC lib/init/subsystem.o 00:02:58.735 CC lib/fsdev/fsdev_io.o 00:02:58.735 CC lib/init/subsystem_rpc.o 00:02:58.995 CC lib/virtio/virtio_vhost_user.o 00:02:58.995 CC lib/nvme/nvme_rdma.o 00:02:58.995 CC lib/fsdev/fsdev_rpc.o 00:02:58.995 CC lib/init/rpc.o 00:02:58.995 CC lib/virtio/virtio_vfio_user.o 00:02:58.995 LIB libspdk_accel.a 00:02:59.255 CC lib/virtio/virtio_pci.o 00:02:59.255 SO libspdk_accel.so.16.0 00:02:59.255 LIB libspdk_fsdev.a 00:02:59.255 LIB libspdk_init.a 00:02:59.255 SYMLINK libspdk_accel.so 00:02:59.255 SO libspdk_fsdev.so.1.0 00:02:59.255 SO libspdk_init.so.6.0 00:02:59.255 SYMLINK libspdk_fsdev.so 00:02:59.255 SYMLINK libspdk_init.so 00:02:59.515 LIB libspdk_virtio.a 00:02:59.515 CC lib/bdev/bdev.o 00:02:59.515 CC lib/bdev/part.o 00:02:59.515 CC lib/bdev/bdev_zone.o 00:02:59.515 CC lib/bdev/bdev_rpc.o 00:02:59.515 CC lib/bdev/scsi_nvme.o 00:02:59.515 SO libspdk_virtio.so.7.0 00:02:59.515 SYMLINK libspdk_virtio.so 00:02:59.515 CC lib/event/app.o 00:02:59.515 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:59.515 CC lib/event/reactor.o 00:02:59.776 CC lib/event/log_rpc.o 
00:02:59.776 CC lib/event/app_rpc.o 00:02:59.776 CC lib/event/scheduler_static.o 00:03:00.035 LIB libspdk_event.a 00:03:00.035 SO libspdk_event.so.15.0 00:03:00.295 SYMLINK libspdk_event.so 00:03:00.295 LIB libspdk_fuse_dispatcher.a 00:03:00.295 SO libspdk_fuse_dispatcher.so.1.0 00:03:00.295 SYMLINK libspdk_fuse_dispatcher.so 00:03:00.295 LIB libspdk_nvme.a 00:03:00.554 SO libspdk_nvme.so.14.0 00:03:00.814 SYMLINK libspdk_nvme.so 00:03:01.384 LIB libspdk_blob.a 00:03:01.384 SO libspdk_blob.so.11.0 00:03:01.644 SYMLINK libspdk_blob.so 00:03:01.905 CC lib/lvol/lvol.o 00:03:01.905 CC lib/blobfs/blobfs.o 00:03:01.905 CC lib/blobfs/tree.o 00:03:02.474 LIB libspdk_bdev.a 00:03:02.474 SO libspdk_bdev.so.17.0 00:03:02.474 SYMLINK libspdk_bdev.so 00:03:02.734 LIB libspdk_blobfs.a 00:03:02.734 SO libspdk_blobfs.so.10.0 00:03:02.734 CC lib/ftl/ftl_core.o 00:03:02.734 CC lib/ftl/ftl_init.o 00:03:02.734 CC lib/nvmf/ctrlr.o 00:03:02.734 CC lib/ftl/ftl_layout.o 00:03:02.734 CC lib/ftl/ftl_debug.o 00:03:02.734 CC lib/scsi/dev.o 00:03:02.734 CC lib/ublk/ublk.o 00:03:02.734 CC lib/nbd/nbd.o 00:03:02.734 SYMLINK libspdk_blobfs.so 00:03:02.734 CC lib/nbd/nbd_rpc.o 00:03:02.994 LIB libspdk_lvol.a 00:03:02.994 SO libspdk_lvol.so.10.0 00:03:02.994 SYMLINK libspdk_lvol.so 00:03:02.994 CC lib/scsi/lun.o 00:03:02.994 CC lib/ftl/ftl_io.o 00:03:02.994 CC lib/nvmf/ctrlr_discovery.o 00:03:02.994 CC lib/ublk/ublk_rpc.o 00:03:02.994 CC lib/scsi/port.o 00:03:02.994 CC lib/ftl/ftl_sb.o 00:03:03.254 CC lib/nvmf/ctrlr_bdev.o 00:03:03.254 CC lib/nvmf/subsystem.o 00:03:03.254 CC lib/nvmf/nvmf.o 00:03:03.254 LIB libspdk_nbd.a 00:03:03.254 CC lib/scsi/scsi.o 00:03:03.254 CC lib/scsi/scsi_bdev.o 00:03:03.254 CC lib/ftl/ftl_l2p.o 00:03:03.254 SO libspdk_nbd.so.7.0 00:03:03.254 SYMLINK libspdk_nbd.so 00:03:03.254 CC lib/ftl/ftl_l2p_flat.o 00:03:03.254 CC lib/ftl/ftl_nv_cache.o 00:03:03.515 CC lib/nvmf/nvmf_rpc.o 00:03:03.515 LIB libspdk_ublk.a 00:03:03.515 SO libspdk_ublk.so.3.0 00:03:03.515 CC 
lib/scsi/scsi_pr.o 00:03:03.515 CC lib/scsi/scsi_rpc.o 00:03:03.515 SYMLINK libspdk_ublk.so 00:03:03.515 CC lib/nvmf/transport.o 00:03:03.774 CC lib/nvmf/tcp.o 00:03:03.774 CC lib/nvmf/stubs.o 00:03:03.774 CC lib/scsi/task.o 00:03:03.774 CC lib/nvmf/mdns_server.o 00:03:04.034 LIB libspdk_scsi.a 00:03:04.034 CC lib/nvmf/rdma.o 00:03:04.034 SO libspdk_scsi.so.9.0 00:03:04.034 SYMLINK libspdk_scsi.so 00:03:04.034 CC lib/nvmf/auth.o 00:03:04.294 CC lib/ftl/ftl_band.o 00:03:04.294 CC lib/ftl/ftl_band_ops.o 00:03:04.294 CC lib/ftl/ftl_writer.o 00:03:04.294 CC lib/ftl/ftl_rq.o 00:03:04.580 CC lib/vhost/vhost.o 00:03:04.580 CC lib/iscsi/conn.o 00:03:04.580 CC lib/vhost/vhost_rpc.o 00:03:04.580 CC lib/vhost/vhost_scsi.o 00:03:04.580 CC lib/vhost/vhost_blk.o 00:03:04.580 CC lib/ftl/ftl_reloc.o 00:03:04.850 CC lib/vhost/rte_vhost_user.o 00:03:05.109 CC lib/iscsi/init_grp.o 00:03:05.109 CC lib/ftl/ftl_l2p_cache.o 00:03:05.109 CC lib/ftl/ftl_p2l.o 00:03:05.109 CC lib/iscsi/iscsi.o 00:03:05.109 CC lib/ftl/ftl_p2l_log.o 00:03:05.501 CC lib/iscsi/param.o 00:03:05.501 CC lib/iscsi/portal_grp.o 00:03:05.501 CC lib/ftl/mngt/ftl_mngt.o 00:03:05.501 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:05.501 CC lib/iscsi/tgt_node.o 00:03:05.501 CC lib/iscsi/iscsi_subsystem.o 00:03:05.760 CC lib/iscsi/iscsi_rpc.o 00:03:05.760 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:05.760 CC lib/iscsi/task.o 00:03:05.760 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:05.760 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:05.760 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:05.760 LIB libspdk_vhost.a 00:03:05.760 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:06.019 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:06.019 SO libspdk_vhost.so.8.0 00:03:06.019 SYMLINK libspdk_vhost.so 00:03:06.019 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:06.019 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:06.019 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:06.019 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:06.019 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:06.019 CC lib/ftl/utils/ftl_conf.o 
00:03:06.019 CC lib/ftl/utils/ftl_md.o 00:03:06.019 CC lib/ftl/utils/ftl_mempool.o 00:03:06.279 CC lib/ftl/utils/ftl_bitmap.o 00:03:06.279 CC lib/ftl/utils/ftl_property.o 00:03:06.279 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:06.279 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:06.279 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:06.279 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:06.538 LIB libspdk_nvmf.a 00:03:06.538 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:06.538 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:06.539 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:06.539 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:06.539 SO libspdk_nvmf.so.19.0 00:03:06.539 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:06.539 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:06.539 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:06.539 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:06.539 LIB libspdk_iscsi.a 00:03:06.539 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:06.799 CC lib/ftl/base/ftl_base_dev.o 00:03:06.799 SO libspdk_iscsi.so.8.0 00:03:06.799 CC lib/ftl/base/ftl_base_bdev.o 00:03:06.799 CC lib/ftl/ftl_trace.o 00:03:06.799 SYMLINK libspdk_nvmf.so 00:03:06.799 SYMLINK libspdk_iscsi.so 00:03:07.059 LIB libspdk_ftl.a 00:03:07.319 SO libspdk_ftl.so.9.0 00:03:07.578 SYMLINK libspdk_ftl.so 00:03:07.838 CC module/env_dpdk/env_dpdk_rpc.o 00:03:07.838 CC module/fsdev/aio/fsdev_aio.o 00:03:07.838 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:07.838 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:07.838 CC module/accel/error/accel_error.o 00:03:07.838 CC module/scheduler/gscheduler/gscheduler.o 00:03:08.097 CC module/sock/posix/posix.o 00:03:08.097 CC module/blob/bdev/blob_bdev.o 00:03:08.097 CC module/accel/ioat/accel_ioat.o 00:03:08.097 CC module/keyring/file/keyring.o 00:03:08.097 LIB libspdk_env_dpdk_rpc.a 00:03:08.097 SO libspdk_env_dpdk_rpc.so.6.0 00:03:08.097 SYMLINK libspdk_env_dpdk_rpc.so 00:03:08.097 CC module/accel/error/accel_error_rpc.o 00:03:08.097 LIB libspdk_scheduler_gscheduler.a 00:03:08.097 
LIB libspdk_scheduler_dpdk_governor.a 00:03:08.097 CC module/keyring/file/keyring_rpc.o 00:03:08.097 SO libspdk_scheduler_gscheduler.so.4.0 00:03:08.097 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:08.097 CC module/accel/ioat/accel_ioat_rpc.o 00:03:08.097 LIB libspdk_scheduler_dynamic.a 00:03:08.097 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:08.097 SO libspdk_scheduler_dynamic.so.4.0 00:03:08.097 SYMLINK libspdk_scheduler_gscheduler.so 00:03:08.097 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:08.097 SYMLINK libspdk_scheduler_dynamic.so 00:03:08.097 LIB libspdk_accel_error.a 00:03:08.355 LIB libspdk_keyring_file.a 00:03:08.355 LIB libspdk_blob_bdev.a 00:03:08.355 SO libspdk_accel_error.so.2.0 00:03:08.355 LIB libspdk_accel_ioat.a 00:03:08.355 SO libspdk_keyring_file.so.2.0 00:03:08.355 SO libspdk_blob_bdev.so.11.0 00:03:08.355 SO libspdk_accel_ioat.so.6.0 00:03:08.355 SYMLINK libspdk_accel_error.so 00:03:08.355 CC module/fsdev/aio/linux_aio_mgr.o 00:03:08.355 SYMLINK libspdk_keyring_file.so 00:03:08.355 SYMLINK libspdk_blob_bdev.so 00:03:08.355 CC module/accel/dsa/accel_dsa.o 00:03:08.355 CC module/accel/dsa/accel_dsa_rpc.o 00:03:08.355 CC module/keyring/linux/keyring.o 00:03:08.355 SYMLINK libspdk_accel_ioat.so 00:03:08.355 CC module/keyring/linux/keyring_rpc.o 00:03:08.355 CC module/accel/iaa/accel_iaa.o 00:03:08.355 LIB libspdk_keyring_linux.a 00:03:08.615 CC module/accel/iaa/accel_iaa_rpc.o 00:03:08.615 SO libspdk_keyring_linux.so.1.0 00:03:08.615 CC module/bdev/delay/vbdev_delay.o 00:03:08.615 CC module/blobfs/bdev/blobfs_bdev.o 00:03:08.615 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:08.615 SYMLINK libspdk_keyring_linux.so 00:03:08.615 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:08.615 CC module/bdev/error/vbdev_error.o 00:03:08.615 LIB libspdk_accel_dsa.a 00:03:08.615 LIB libspdk_fsdev_aio.a 00:03:08.615 CC module/bdev/gpt/gpt.o 00:03:08.615 LIB libspdk_accel_iaa.a 00:03:08.615 SO libspdk_accel_dsa.so.5.0 00:03:08.615 SO libspdk_accel_iaa.so.3.0 
00:03:08.615 SO libspdk_fsdev_aio.so.1.0 00:03:08.615 LIB libspdk_sock_posix.a 00:03:08.615 CC module/bdev/error/vbdev_error_rpc.o 00:03:08.615 SYMLINK libspdk_accel_iaa.so 00:03:08.615 LIB libspdk_blobfs_bdev.a 00:03:08.615 SYMLINK libspdk_fsdev_aio.so 00:03:08.873 SO libspdk_sock_posix.so.6.0 00:03:08.873 SYMLINK libspdk_accel_dsa.so 00:03:08.873 SO libspdk_blobfs_bdev.so.6.0 00:03:08.873 CC module/bdev/gpt/vbdev_gpt.o 00:03:08.873 SYMLINK libspdk_sock_posix.so 00:03:08.873 SYMLINK libspdk_blobfs_bdev.so 00:03:08.873 LIB libspdk_bdev_error.a 00:03:08.873 LIB libspdk_bdev_delay.a 00:03:08.873 CC module/bdev/malloc/bdev_malloc.o 00:03:08.873 CC module/bdev/lvol/vbdev_lvol.o 00:03:08.873 SO libspdk_bdev_error.so.6.0 00:03:08.873 SO libspdk_bdev_delay.so.6.0 00:03:08.873 CC module/bdev/null/bdev_null.o 00:03:08.873 CC module/bdev/nvme/bdev_nvme.o 00:03:08.873 CC module/bdev/passthru/vbdev_passthru.o 00:03:08.873 CC module/bdev/raid/bdev_raid.o 00:03:08.873 SYMLINK libspdk_bdev_delay.so 00:03:08.873 SYMLINK libspdk_bdev_error.so 00:03:08.873 CC module/bdev/raid/bdev_raid_rpc.o 00:03:08.873 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:09.132 CC module/bdev/split/vbdev_split.o 00:03:09.132 LIB libspdk_bdev_gpt.a 00:03:09.132 SO libspdk_bdev_gpt.so.6.0 00:03:09.132 SYMLINK libspdk_bdev_gpt.so 00:03:09.132 CC module/bdev/split/vbdev_split_rpc.o 00:03:09.132 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:09.132 CC module/bdev/null/bdev_null_rpc.o 00:03:09.391 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:09.391 LIB libspdk_bdev_split.a 00:03:09.391 SO libspdk_bdev_split.so.6.0 00:03:09.391 LIB libspdk_bdev_passthru.a 00:03:09.391 LIB libspdk_bdev_null.a 00:03:09.391 SO libspdk_bdev_passthru.so.6.0 00:03:09.391 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:09.391 SO libspdk_bdev_null.so.6.0 00:03:09.391 SYMLINK libspdk_bdev_split.so 00:03:09.391 CC module/bdev/aio/bdev_aio.o 00:03:09.391 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:09.391 CC 
module/bdev/lvol/vbdev_lvol_rpc.o 00:03:09.391 LIB libspdk_bdev_malloc.a 00:03:09.391 SYMLINK libspdk_bdev_null.so 00:03:09.391 SYMLINK libspdk_bdev_passthru.so 00:03:09.391 CC module/bdev/nvme/nvme_rpc.o 00:03:09.391 CC module/bdev/nvme/bdev_mdns_client.o 00:03:09.391 SO libspdk_bdev_malloc.so.6.0 00:03:09.650 SYMLINK libspdk_bdev_malloc.so 00:03:09.650 CC module/bdev/raid/bdev_raid_sb.o 00:03:09.650 CC module/bdev/nvme/vbdev_opal.o 00:03:09.650 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:09.650 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:09.650 CC module/bdev/raid/raid0.o 00:03:09.650 CC module/bdev/aio/bdev_aio_rpc.o 00:03:09.650 LIB libspdk_bdev_zone_block.a 00:03:09.650 LIB libspdk_bdev_lvol.a 00:03:09.650 SO libspdk_bdev_zone_block.so.6.0 00:03:09.908 CC module/bdev/raid/raid1.o 00:03:09.908 CC module/bdev/raid/concat.o 00:03:09.908 SO libspdk_bdev_lvol.so.6.0 00:03:09.908 SYMLINK libspdk_bdev_zone_block.so 00:03:09.908 SYMLINK libspdk_bdev_lvol.so 00:03:09.908 CC module/bdev/raid/raid5f.o 00:03:09.908 LIB libspdk_bdev_aio.a 00:03:09.908 SO libspdk_bdev_aio.so.6.0 00:03:09.908 CC module/bdev/ftl/bdev_ftl.o 00:03:09.908 SYMLINK libspdk_bdev_aio.so 00:03:09.908 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:10.167 CC module/bdev/iscsi/bdev_iscsi.o 00:03:10.167 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:10.167 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:10.167 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:10.167 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:10.425 LIB libspdk_bdev_ftl.a 00:03:10.425 SO libspdk_bdev_ftl.so.6.0 00:03:10.425 LIB libspdk_bdev_iscsi.a 00:03:10.425 SYMLINK libspdk_bdev_ftl.so 00:03:10.425 SO libspdk_bdev_iscsi.so.6.0 00:03:10.425 LIB libspdk_bdev_raid.a 00:03:10.683 SYMLINK libspdk_bdev_iscsi.so 00:03:10.683 LIB libspdk_bdev_virtio.a 00:03:10.683 SO libspdk_bdev_raid.so.6.0 00:03:10.683 SO libspdk_bdev_virtio.so.6.0 00:03:10.683 SYMLINK libspdk_bdev_raid.so 00:03:10.683 SYMLINK libspdk_bdev_virtio.so 00:03:11.252 LIB 
libspdk_bdev_nvme.a 00:03:11.512 SO libspdk_bdev_nvme.so.7.0 00:03:11.512 SYMLINK libspdk_bdev_nvme.so 00:03:12.083 CC module/event/subsystems/keyring/keyring.o 00:03:12.083 CC module/event/subsystems/iobuf/iobuf.o 00:03:12.083 CC module/event/subsystems/vmd/vmd.o 00:03:12.083 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:12.083 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:12.083 CC module/event/subsystems/sock/sock.o 00:03:12.083 CC module/event/subsystems/scheduler/scheduler.o 00:03:12.083 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:12.083 CC module/event/subsystems/fsdev/fsdev.o 00:03:12.348 LIB libspdk_event_keyring.a 00:03:12.348 LIB libspdk_event_vhost_blk.a 00:03:12.348 LIB libspdk_event_sock.a 00:03:12.348 LIB libspdk_event_scheduler.a 00:03:12.348 LIB libspdk_event_fsdev.a 00:03:12.348 LIB libspdk_event_vmd.a 00:03:12.348 LIB libspdk_event_iobuf.a 00:03:12.348 SO libspdk_event_keyring.so.1.0 00:03:12.348 SO libspdk_event_vhost_blk.so.3.0 00:03:12.348 SO libspdk_event_sock.so.5.0 00:03:12.348 SO libspdk_event_fsdev.so.1.0 00:03:12.348 SO libspdk_event_scheduler.so.4.0 00:03:12.348 SO libspdk_event_iobuf.so.3.0 00:03:12.348 SO libspdk_event_vmd.so.6.0 00:03:12.348 SYMLINK libspdk_event_keyring.so 00:03:12.348 SYMLINK libspdk_event_vhost_blk.so 00:03:12.348 SYMLINK libspdk_event_fsdev.so 00:03:12.348 SYMLINK libspdk_event_sock.so 00:03:12.348 SYMLINK libspdk_event_scheduler.so 00:03:12.348 SYMLINK libspdk_event_iobuf.so 00:03:12.348 SYMLINK libspdk_event_vmd.so 00:03:12.608 CC module/event/subsystems/accel/accel.o 00:03:12.868 LIB libspdk_event_accel.a 00:03:12.868 SO libspdk_event_accel.so.6.0 00:03:12.868 SYMLINK libspdk_event_accel.so 00:03:13.438 CC module/event/subsystems/bdev/bdev.o 00:03:13.698 LIB libspdk_event_bdev.a 00:03:13.698 SO libspdk_event_bdev.so.6.0 00:03:13.698 SYMLINK libspdk_event_bdev.so 00:03:13.957 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:13.957 CC module/event/subsystems/ublk/ublk.o 00:03:13.957 CC 
module/event/subsystems/nvmf/nvmf_tgt.o 00:03:13.957 CC module/event/subsystems/scsi/scsi.o 00:03:13.957 CC module/event/subsystems/nbd/nbd.o 00:03:14.216 LIB libspdk_event_ublk.a 00:03:14.216 LIB libspdk_event_nbd.a 00:03:14.216 LIB libspdk_event_scsi.a 00:03:14.216 SO libspdk_event_nbd.so.6.0 00:03:14.216 SO libspdk_event_ublk.so.3.0 00:03:14.216 SO libspdk_event_scsi.so.6.0 00:03:14.216 LIB libspdk_event_nvmf.a 00:03:14.216 SYMLINK libspdk_event_ublk.so 00:03:14.216 SYMLINK libspdk_event_nbd.so 00:03:14.216 SYMLINK libspdk_event_scsi.so 00:03:14.216 SO libspdk_event_nvmf.so.6.0 00:03:14.476 SYMLINK libspdk_event_nvmf.so 00:03:14.736 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:14.736 CC module/event/subsystems/iscsi/iscsi.o 00:03:14.736 LIB libspdk_event_vhost_scsi.a 00:03:14.996 SO libspdk_event_vhost_scsi.so.3.0 00:03:14.996 LIB libspdk_event_iscsi.a 00:03:14.996 SO libspdk_event_iscsi.so.6.0 00:03:14.996 SYMLINK libspdk_event_vhost_scsi.so 00:03:14.996 SYMLINK libspdk_event_iscsi.so 00:03:15.257 SO libspdk.so.6.0 00:03:15.257 SYMLINK libspdk.so 00:03:15.515 CC app/trace_record/trace_record.o 00:03:15.515 CC app/spdk_lspci/spdk_lspci.o 00:03:15.515 CC app/spdk_nvme_perf/perf.o 00:03:15.515 CXX app/trace/trace.o 00:03:15.515 CC app/nvmf_tgt/nvmf_main.o 00:03:15.515 CC app/iscsi_tgt/iscsi_tgt.o 00:03:15.515 CC app/spdk_tgt/spdk_tgt.o 00:03:15.515 CC examples/ioat/perf/perf.o 00:03:15.515 CC test/thread/poller_perf/poller_perf.o 00:03:15.515 CC examples/util/zipf/zipf.o 00:03:15.774 LINK spdk_lspci 00:03:15.774 LINK nvmf_tgt 00:03:15.774 LINK poller_perf 00:03:15.774 LINK iscsi_tgt 00:03:15.774 LINK spdk_tgt 00:03:15.774 LINK spdk_trace_record 00:03:15.774 LINK zipf 00:03:15.774 LINK ioat_perf 00:03:15.774 LINK spdk_trace 00:03:16.033 CC examples/ioat/verify/verify.o 00:03:16.033 CC app/spdk_nvme_identify/identify.o 00:03:16.033 CC app/spdk_nvme_discover/discovery_aer.o 00:03:16.033 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:16.033 TEST_HEADER 
include/spdk/accel.h 00:03:16.033 TEST_HEADER include/spdk/accel_module.h 00:03:16.033 TEST_HEADER include/spdk/assert.h 00:03:16.033 TEST_HEADER include/spdk/barrier.h 00:03:16.033 TEST_HEADER include/spdk/base64.h 00:03:16.033 TEST_HEADER include/spdk/bdev.h 00:03:16.033 TEST_HEADER include/spdk/bdev_module.h 00:03:16.033 TEST_HEADER include/spdk/bdev_zone.h 00:03:16.033 TEST_HEADER include/spdk/bit_array.h 00:03:16.033 TEST_HEADER include/spdk/bit_pool.h 00:03:16.033 TEST_HEADER include/spdk/blob_bdev.h 00:03:16.033 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:16.033 TEST_HEADER include/spdk/blobfs.h 00:03:16.033 TEST_HEADER include/spdk/blob.h 00:03:16.033 TEST_HEADER include/spdk/conf.h 00:03:16.033 TEST_HEADER include/spdk/config.h 00:03:16.033 TEST_HEADER include/spdk/cpuset.h 00:03:16.033 TEST_HEADER include/spdk/crc16.h 00:03:16.033 CC test/dma/test_dma/test_dma.o 00:03:16.033 TEST_HEADER include/spdk/crc32.h 00:03:16.033 TEST_HEADER include/spdk/crc64.h 00:03:16.033 TEST_HEADER include/spdk/dif.h 00:03:16.033 TEST_HEADER include/spdk/dma.h 00:03:16.033 LINK verify 00:03:16.033 TEST_HEADER include/spdk/endian.h 00:03:16.033 TEST_HEADER include/spdk/env_dpdk.h 00:03:16.033 TEST_HEADER include/spdk/env.h 00:03:16.033 TEST_HEADER include/spdk/event.h 00:03:16.033 TEST_HEADER include/spdk/fd_group.h 00:03:16.033 TEST_HEADER include/spdk/fd.h 00:03:16.033 TEST_HEADER include/spdk/file.h 00:03:16.033 TEST_HEADER include/spdk/fsdev.h 00:03:16.033 TEST_HEADER include/spdk/fsdev_module.h 00:03:16.033 TEST_HEADER include/spdk/ftl.h 00:03:16.033 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:16.033 TEST_HEADER include/spdk/gpt_spec.h 00:03:16.033 TEST_HEADER include/spdk/hexlify.h 00:03:16.033 TEST_HEADER include/spdk/histogram_data.h 00:03:16.033 TEST_HEADER include/spdk/idxd.h 00:03:16.033 TEST_HEADER include/spdk/idxd_spec.h 00:03:16.033 TEST_HEADER include/spdk/init.h 00:03:16.033 TEST_HEADER include/spdk/ioat.h 00:03:16.033 TEST_HEADER include/spdk/ioat_spec.h 
00:03:16.293 TEST_HEADER include/spdk/iscsi_spec.h 00:03:16.293 TEST_HEADER include/spdk/json.h 00:03:16.293 TEST_HEADER include/spdk/jsonrpc.h 00:03:16.293 TEST_HEADER include/spdk/keyring.h 00:03:16.293 TEST_HEADER include/spdk/keyring_module.h 00:03:16.293 TEST_HEADER include/spdk/likely.h 00:03:16.293 TEST_HEADER include/spdk/log.h 00:03:16.293 TEST_HEADER include/spdk/lvol.h 00:03:16.293 TEST_HEADER include/spdk/md5.h 00:03:16.293 TEST_HEADER include/spdk/memory.h 00:03:16.293 TEST_HEADER include/spdk/mmio.h 00:03:16.293 TEST_HEADER include/spdk/nbd.h 00:03:16.293 TEST_HEADER include/spdk/net.h 00:03:16.293 TEST_HEADER include/spdk/notify.h 00:03:16.293 TEST_HEADER include/spdk/nvme.h 00:03:16.293 TEST_HEADER include/spdk/nvme_intel.h 00:03:16.293 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:16.293 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:16.293 TEST_HEADER include/spdk/nvme_spec.h 00:03:16.293 TEST_HEADER include/spdk/nvme_zns.h 00:03:16.293 CC test/app/bdev_svc/bdev_svc.o 00:03:16.293 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:16.293 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:16.293 TEST_HEADER include/spdk/nvmf.h 00:03:16.293 TEST_HEADER include/spdk/nvmf_spec.h 00:03:16.293 TEST_HEADER include/spdk/nvmf_transport.h 00:03:16.293 TEST_HEADER include/spdk/opal.h 00:03:16.293 TEST_HEADER include/spdk/opal_spec.h 00:03:16.293 CC examples/thread/thread/thread_ex.o 00:03:16.293 TEST_HEADER include/spdk/pci_ids.h 00:03:16.293 TEST_HEADER include/spdk/pipe.h 00:03:16.293 TEST_HEADER include/spdk/queue.h 00:03:16.293 TEST_HEADER include/spdk/reduce.h 00:03:16.293 TEST_HEADER include/spdk/rpc.h 00:03:16.293 CC examples/sock/hello_world/hello_sock.o 00:03:16.293 TEST_HEADER include/spdk/scheduler.h 00:03:16.293 TEST_HEADER include/spdk/scsi.h 00:03:16.293 TEST_HEADER include/spdk/scsi_spec.h 00:03:16.293 TEST_HEADER include/spdk/sock.h 00:03:16.293 TEST_HEADER include/spdk/stdinc.h 00:03:16.293 TEST_HEADER include/spdk/string.h 00:03:16.293 TEST_HEADER 
include/spdk/thread.h 00:03:16.293 TEST_HEADER include/spdk/trace.h 00:03:16.293 TEST_HEADER include/spdk/trace_parser.h 00:03:16.293 TEST_HEADER include/spdk/tree.h 00:03:16.293 TEST_HEADER include/spdk/ublk.h 00:03:16.293 TEST_HEADER include/spdk/util.h 00:03:16.293 TEST_HEADER include/spdk/uuid.h 00:03:16.293 TEST_HEADER include/spdk/version.h 00:03:16.293 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:16.293 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:16.293 TEST_HEADER include/spdk/vhost.h 00:03:16.293 TEST_HEADER include/spdk/vmd.h 00:03:16.294 TEST_HEADER include/spdk/xor.h 00:03:16.294 TEST_HEADER include/spdk/zipf.h 00:03:16.294 LINK interrupt_tgt 00:03:16.294 CXX test/cpp_headers/accel.o 00:03:16.294 LINK spdk_nvme_discover 00:03:16.294 CXX test/cpp_headers/accel_module.o 00:03:16.294 LINK bdev_svc 00:03:16.294 CXX test/cpp_headers/assert.o 00:03:16.294 LINK spdk_nvme_perf 00:03:16.554 CXX test/cpp_headers/barrier.o 00:03:16.554 LINK thread 00:03:16.554 LINK hello_sock 00:03:16.554 CC examples/vmd/lsvmd/lsvmd.o 00:03:16.554 CXX test/cpp_headers/base64.o 00:03:16.554 LINK test_dma 00:03:16.554 CC test/env/mem_callbacks/mem_callbacks.o 00:03:16.554 CXX test/cpp_headers/bdev.o 00:03:16.554 LINK lsvmd 00:03:16.554 CC test/rpc_client/rpc_client_test.o 00:03:16.813 CC test/event/event_perf/event_perf.o 00:03:16.813 CC examples/idxd/perf/perf.o 00:03:16.813 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:16.813 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:16.813 CXX test/cpp_headers/bdev_module.o 00:03:16.813 LINK rpc_client_test 00:03:16.813 LINK event_perf 00:03:16.813 CC test/app/histogram_perf/histogram_perf.o 00:03:16.813 LINK spdk_nvme_identify 00:03:16.813 CC examples/vmd/led/led.o 00:03:17.073 CXX test/cpp_headers/bdev_zone.o 00:03:17.073 LINK histogram_perf 00:03:17.073 LINK led 00:03:17.073 CC test/event/reactor/reactor.o 00:03:17.073 LINK idxd_perf 00:03:17.073 CC test/event/reactor_perf/reactor_perf.o 00:03:17.073 LINK mem_callbacks 00:03:17.073 
LINK nvme_fuzz 00:03:17.073 CC app/spdk_top/spdk_top.o 00:03:17.073 CXX test/cpp_headers/bit_array.o 00:03:17.073 LINK reactor 00:03:17.332 LINK reactor_perf 00:03:17.332 CC test/event/app_repeat/app_repeat.o 00:03:17.332 CC test/env/vtophys/vtophys.o 00:03:17.332 CC test/event/scheduler/scheduler.o 00:03:17.332 CC examples/nvme/hello_world/hello_world.o 00:03:17.332 CXX test/cpp_headers/bit_pool.o 00:03:17.332 CC test/app/jsoncat/jsoncat.o 00:03:17.332 LINK app_repeat 00:03:17.591 LINK vtophys 00:03:17.591 CXX test/cpp_headers/blob_bdev.o 00:03:17.591 CC test/app/stub/stub.o 00:03:17.591 CC examples/nvme/reconnect/reconnect.o 00:03:17.591 LINK jsoncat 00:03:17.591 LINK scheduler 00:03:17.591 LINK hello_world 00:03:17.591 CXX test/cpp_headers/blobfs_bdev.o 00:03:17.591 LINK stub 00:03:17.851 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:17.851 CC test/env/memory/memory_ut.o 00:03:17.851 CXX test/cpp_headers/blobfs.o 00:03:17.851 CC app/vhost/vhost.o 00:03:17.851 LINK reconnect 00:03:17.851 CXX test/cpp_headers/blob.o 00:03:17.851 LINK env_dpdk_post_init 00:03:17.851 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:17.851 CC examples/accel/perf/accel_perf.o 00:03:18.110 LINK vhost 00:03:18.110 CXX test/cpp_headers/conf.o 00:03:18.111 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:18.111 CC app/spdk_dd/spdk_dd.o 00:03:18.111 LINK spdk_top 00:03:18.111 CXX test/cpp_headers/config.o 00:03:18.111 CXX test/cpp_headers/cpuset.o 00:03:18.111 LINK hello_fsdev 00:03:18.111 CC app/fio/nvme/fio_plugin.o 00:03:18.377 CC app/fio/bdev/fio_plugin.o 00:03:18.377 CC test/env/pci/pci_ut.o 00:03:18.377 CXX test/cpp_headers/crc16.o 00:03:18.377 CXX test/cpp_headers/crc32.o 00:03:18.377 LINK spdk_dd 00:03:18.377 LINK accel_perf 00:03:18.679 CXX test/cpp_headers/crc64.o 00:03:18.679 CXX test/cpp_headers/dif.o 00:03:18.679 LINK nvme_manage 00:03:18.679 CC test/accel/dif/dif.o 00:03:18.679 LINK iscsi_fuzz 00:03:18.679 CC examples/nvme/arbitration/arbitration.o 00:03:18.940 
LINK pci_ut 00:03:18.940 CXX test/cpp_headers/dma.o 00:03:18.940 LINK spdk_bdev 00:03:18.940 LINK spdk_nvme 00:03:18.940 CXX test/cpp_headers/endian.o 00:03:18.940 CXX test/cpp_headers/env_dpdk.o 00:03:18.940 CC test/blobfs/mkfs/mkfs.o 00:03:18.940 LINK memory_ut 00:03:18.940 CXX test/cpp_headers/env.o 00:03:18.940 CXX test/cpp_headers/event.o 00:03:19.198 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:19.198 LINK arbitration 00:03:19.198 LINK mkfs 00:03:19.198 CXX test/cpp_headers/fd_group.o 00:03:19.198 CC examples/blob/hello_world/hello_blob.o 00:03:19.198 CC examples/blob/cli/blobcli.o 00:03:19.198 CC test/lvol/esnap/esnap.o 00:03:19.198 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:19.198 CXX test/cpp_headers/fd.o 00:03:19.198 CXX test/cpp_headers/file.o 00:03:19.456 CC examples/bdev/hello_world/hello_bdev.o 00:03:19.456 CC examples/nvme/hotplug/hotplug.o 00:03:19.456 LINK hello_blob 00:03:19.456 CC test/nvme/aer/aer.o 00:03:19.456 LINK dif 00:03:19.456 CXX test/cpp_headers/fsdev.o 00:03:19.456 CC examples/bdev/bdevperf/bdevperf.o 00:03:19.456 LINK hello_bdev 00:03:19.715 CXX test/cpp_headers/fsdev_module.o 00:03:19.715 LINK hotplug 00:03:19.715 LINK vhost_fuzz 00:03:19.715 LINK blobcli 00:03:19.715 CC test/nvme/reset/reset.o 00:03:19.715 CC test/nvme/sgl/sgl.o 00:03:19.715 LINK aer 00:03:19.715 CXX test/cpp_headers/ftl.o 00:03:19.974 CC examples/nvme/abort/abort.o 00:03:19.974 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:19.974 CC test/nvme/e2edp/nvme_dp.o 00:03:19.974 CXX test/cpp_headers/fuse_dispatcher.o 00:03:19.974 LINK reset 00:03:19.974 CC test/bdev/bdevio/bdevio.o 00:03:19.974 LINK sgl 00:03:19.974 CC test/nvme/overhead/overhead.o 00:03:19.974 LINK cmb_copy 00:03:20.232 CXX test/cpp_headers/gpt_spec.o 00:03:20.232 CXX test/cpp_headers/hexlify.o 00:03:20.232 CC test/nvme/err_injection/err_injection.o 00:03:20.232 CC test/nvme/startup/startup.o 00:03:20.232 LINK nvme_dp 00:03:20.232 LINK abort 00:03:20.232 CXX test/cpp_headers/histogram_data.o 
00:03:20.232 LINK overhead 00:03:20.232 LINK bdevperf 00:03:20.490 LINK bdevio 00:03:20.490 LINK startup 00:03:20.490 LINK err_injection 00:03:20.490 CXX test/cpp_headers/idxd.o 00:03:20.490 CC test/nvme/reserve/reserve.o 00:03:20.490 CC test/nvme/simple_copy/simple_copy.o 00:03:20.490 CXX test/cpp_headers/idxd_spec.o 00:03:20.490 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:20.490 CXX test/cpp_headers/init.o 00:03:20.748 CC test/nvme/connect_stress/connect_stress.o 00:03:20.748 CC test/nvme/boot_partition/boot_partition.o 00:03:20.748 LINK reserve 00:03:20.748 CXX test/cpp_headers/ioat.o 00:03:20.748 CC test/nvme/compliance/nvme_compliance.o 00:03:20.748 CC test/nvme/fused_ordering/fused_ordering.o 00:03:20.748 LINK pmr_persistence 00:03:20.748 LINK simple_copy 00:03:20.748 CXX test/cpp_headers/ioat_spec.o 00:03:20.748 LINK boot_partition 00:03:20.748 LINK connect_stress 00:03:21.006 LINK fused_ordering 00:03:21.006 CXX test/cpp_headers/iscsi_spec.o 00:03:21.006 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:21.006 CC test/nvme/fdp/fdp.o 00:03:21.006 CC test/nvme/cuse/cuse.o 00:03:21.006 CXX test/cpp_headers/json.o 00:03:21.006 CXX test/cpp_headers/jsonrpc.o 00:03:21.006 LINK nvme_compliance 00:03:21.006 CXX test/cpp_headers/keyring.o 00:03:21.006 CC examples/nvmf/nvmf/nvmf.o 00:03:21.006 CXX test/cpp_headers/keyring_module.o 00:03:21.006 CXX test/cpp_headers/likely.o 00:03:21.006 LINK doorbell_aers 00:03:21.006 CXX test/cpp_headers/log.o 00:03:21.265 CXX test/cpp_headers/lvol.o 00:03:21.265 CXX test/cpp_headers/md5.o 00:03:21.265 CXX test/cpp_headers/memory.o 00:03:21.265 CXX test/cpp_headers/mmio.o 00:03:21.265 CXX test/cpp_headers/nbd.o 00:03:21.265 CXX test/cpp_headers/net.o 00:03:21.265 LINK fdp 00:03:21.265 CXX test/cpp_headers/notify.o 00:03:21.265 CXX test/cpp_headers/nvme.o 00:03:21.265 LINK nvmf 00:03:21.265 CXX test/cpp_headers/nvme_intel.o 00:03:21.522 CXX test/cpp_headers/nvme_ocssd.o 00:03:21.522 CXX test/cpp_headers/nvme_ocssd_spec.o 
00:03:21.522 CXX test/cpp_headers/nvme_spec.o 00:03:21.522 CXX test/cpp_headers/nvme_zns.o 00:03:21.522 CXX test/cpp_headers/nvmf_cmd.o 00:03:21.522 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:21.522 CXX test/cpp_headers/nvmf.o 00:03:21.522 CXX test/cpp_headers/nvmf_spec.o 00:03:21.522 CXX test/cpp_headers/nvmf_transport.o 00:03:21.522 CXX test/cpp_headers/opal.o 00:03:21.522 CXX test/cpp_headers/opal_spec.o 00:03:21.522 CXX test/cpp_headers/pci_ids.o 00:03:21.780 CXX test/cpp_headers/pipe.o 00:03:21.780 CXX test/cpp_headers/queue.o 00:03:21.780 CXX test/cpp_headers/reduce.o 00:03:21.780 CXX test/cpp_headers/rpc.o 00:03:21.780 CXX test/cpp_headers/scheduler.o 00:03:21.780 CXX test/cpp_headers/scsi.o 00:03:21.780 CXX test/cpp_headers/scsi_spec.o 00:03:21.780 CXX test/cpp_headers/sock.o 00:03:21.780 CXX test/cpp_headers/stdinc.o 00:03:21.780 CXX test/cpp_headers/string.o 00:03:21.780 CXX test/cpp_headers/thread.o 00:03:21.780 CXX test/cpp_headers/trace.o 00:03:21.780 CXX test/cpp_headers/trace_parser.o 00:03:22.038 CXX test/cpp_headers/tree.o 00:03:22.038 CXX test/cpp_headers/ublk.o 00:03:22.038 CXX test/cpp_headers/util.o 00:03:22.038 CXX test/cpp_headers/uuid.o 00:03:22.038 CXX test/cpp_headers/version.o 00:03:22.038 CXX test/cpp_headers/vfio_user_pci.o 00:03:22.038 CXX test/cpp_headers/vfio_user_spec.o 00:03:22.038 CXX test/cpp_headers/vhost.o 00:03:22.038 CXX test/cpp_headers/vmd.o 00:03:22.038 CXX test/cpp_headers/xor.o 00:03:22.038 CXX test/cpp_headers/zipf.o 00:03:22.296 LINK cuse 00:03:24.837 LINK esnap 00:03:25.097 00:03:25.097 real 1m22.625s 00:03:25.097 user 7m4.803s 00:03:25.097 sys 1m35.989s 00:03:25.097 08:40:01 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:25.097 08:40:01 make -- common/autotest_common.sh@10 -- $ set +x 00:03:25.097 ************************************ 00:03:25.097 END TEST make 00:03:25.097 ************************************ 00:03:25.097 08:40:01 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:25.097 
08:40:01 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:25.097 08:40:01 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:25.097 08:40:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.097 08:40:01 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:25.097 08:40:01 -- pm/common@44 -- $ pid=5460 00:03:25.097 08:40:01 -- pm/common@50 -- $ kill -TERM 5460 00:03:25.097 08:40:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.097 08:40:01 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:25.097 08:40:01 -- pm/common@44 -- $ pid=5462 00:03:25.097 08:40:01 -- pm/common@50 -- $ kill -TERM 5462 00:03:25.097 08:40:01 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:25.097 08:40:01 -- common/autotest_common.sh@1681 -- # lcov --version 00:03:25.097 08:40:01 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:25.392 08:40:01 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:25.392 08:40:01 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:25.392 08:40:01 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:25.392 08:40:01 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:25.392 08:40:01 -- scripts/common.sh@336 -- # IFS=.-: 00:03:25.393 08:40:01 -- scripts/common.sh@336 -- # read -ra ver1 00:03:25.393 08:40:01 -- scripts/common.sh@337 -- # IFS=.-: 00:03:25.393 08:40:01 -- scripts/common.sh@337 -- # read -ra ver2 00:03:25.393 08:40:01 -- scripts/common.sh@338 -- # local 'op=<' 00:03:25.393 08:40:01 -- scripts/common.sh@340 -- # ver1_l=2 00:03:25.393 08:40:01 -- scripts/common.sh@341 -- # ver2_l=1 00:03:25.393 08:40:01 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:25.393 08:40:01 -- scripts/common.sh@344 -- # case "$op" in 00:03:25.393 08:40:01 -- scripts/common.sh@345 -- # : 1 00:03:25.393 08:40:01 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:25.393 08:40:01 
-- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:25.393 08:40:01 -- scripts/common.sh@365 -- # decimal 1 00:03:25.393 08:40:01 -- scripts/common.sh@353 -- # local d=1 00:03:25.393 08:40:01 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:25.393 08:40:01 -- scripts/common.sh@355 -- # echo 1 00:03:25.393 08:40:01 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:25.393 08:40:01 -- scripts/common.sh@366 -- # decimal 2 00:03:25.393 08:40:01 -- scripts/common.sh@353 -- # local d=2 00:03:25.393 08:40:01 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:25.393 08:40:01 -- scripts/common.sh@355 -- # echo 2 00:03:25.393 08:40:01 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:25.393 08:40:01 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:25.393 08:40:01 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:25.393 08:40:01 -- scripts/common.sh@368 -- # return 0 00:03:25.393 08:40:01 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:25.393 08:40:01 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:25.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.393 --rc genhtml_branch_coverage=1 00:03:25.393 --rc genhtml_function_coverage=1 00:03:25.393 --rc genhtml_legend=1 00:03:25.393 --rc geninfo_all_blocks=1 00:03:25.393 --rc geninfo_unexecuted_blocks=1 00:03:25.393 00:03:25.393 ' 00:03:25.393 08:40:01 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:25.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.393 --rc genhtml_branch_coverage=1 00:03:25.393 --rc genhtml_function_coverage=1 00:03:25.393 --rc genhtml_legend=1 00:03:25.393 --rc geninfo_all_blocks=1 00:03:25.393 --rc geninfo_unexecuted_blocks=1 00:03:25.393 00:03:25.393 ' 00:03:25.393 08:40:01 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:25.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.393 --rc 
genhtml_branch_coverage=1 00:03:25.393 --rc genhtml_function_coverage=1 00:03:25.393 --rc genhtml_legend=1 00:03:25.393 --rc geninfo_all_blocks=1 00:03:25.393 --rc geninfo_unexecuted_blocks=1 00:03:25.393 00:03:25.393 ' 00:03:25.393 08:40:01 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:25.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.393 --rc genhtml_branch_coverage=1 00:03:25.393 --rc genhtml_function_coverage=1 00:03:25.393 --rc genhtml_legend=1 00:03:25.393 --rc geninfo_all_blocks=1 00:03:25.393 --rc geninfo_unexecuted_blocks=1 00:03:25.393 00:03:25.393 ' 00:03:25.393 08:40:01 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:25.393 08:40:01 -- nvmf/common.sh@7 -- # uname -s 00:03:25.393 08:40:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:25.393 08:40:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:25.393 08:40:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:25.393 08:40:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:25.393 08:40:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:25.393 08:40:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:25.393 08:40:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:25.393 08:40:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:25.393 08:40:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:25.393 08:40:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:25.393 08:40:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:45fb7a37-c69d-4288-ba7a-a90b847fc105 00:03:25.393 08:40:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=45fb7a37-c69d-4288-ba7a-a90b847fc105 00:03:25.393 08:40:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:25.393 08:40:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:25.393 08:40:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:25.393 08:40:01 -- nvmf/common.sh@22 -- 
# NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:25.393 08:40:01 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:25.393 08:40:01 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:25.393 08:40:01 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:25.393 08:40:01 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:25.393 08:40:01 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:25.393 08:40:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.393 08:40:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.393 08:40:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.393 08:40:01 -- paths/export.sh@5 -- # export PATH 00:03:25.393 08:40:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.393 08:40:01 -- nvmf/common.sh@51 -- # : 0 00:03:25.393 08:40:01 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:25.393 08:40:01 -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:03:25.393 08:40:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:25.393 08:40:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:25.393 08:40:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:25.393 08:40:01 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:25.393 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:25.393 08:40:01 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:25.393 08:40:01 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:25.393 08:40:01 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:25.393 08:40:01 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:25.393 08:40:01 -- spdk/autotest.sh@32 -- # uname -s 00:03:25.393 08:40:01 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:25.393 08:40:01 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:25.393 08:40:01 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:25.393 08:40:01 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:25.393 08:40:01 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:25.393 08:40:01 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:25.393 08:40:01 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:25.393 08:40:01 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:25.393 08:40:01 -- spdk/autotest.sh@48 -- # udevadm_pid=54394 00:03:25.393 08:40:01 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:25.393 08:40:01 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:25.393 08:40:01 -- pm/common@17 -- # local monitor 00:03:25.393 08:40:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.393 08:40:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:25.393 08:40:01 -- pm/common@25 -- # sleep 1 00:03:25.393 08:40:01 -- pm/common@21 -- 
# date +%s 00:03:25.393 08:40:01 -- pm/common@21 -- # date +%s 00:03:25.393 08:40:01 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728117601 00:03:25.393 08:40:01 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728117601 00:03:25.393 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728117601_collect-cpu-load.pm.log 00:03:25.393 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728117601_collect-vmstat.pm.log 00:03:26.342 08:40:02 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:26.342 08:40:02 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:26.342 08:40:02 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:26.342 08:40:02 -- common/autotest_common.sh@10 -- # set +x 00:03:26.342 08:40:02 -- spdk/autotest.sh@59 -- # create_test_list 00:03:26.342 08:40:02 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:26.342 08:40:02 -- common/autotest_common.sh@10 -- # set +x 00:03:26.603 08:40:02 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:26.603 08:40:02 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:26.603 08:40:02 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:26.603 08:40:02 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:26.603 08:40:02 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:26.603 08:40:02 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:26.603 08:40:02 -- common/autotest_common.sh@1455 -- # uname 00:03:26.603 08:40:02 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:26.603 08:40:02 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:26.603 08:40:02 
-- common/autotest_common.sh@1475 -- # uname 00:03:26.603 08:40:02 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:26.603 08:40:02 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:26.603 08:40:02 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:26.603 lcov: LCOV version 1.15 00:03:26.603 08:40:02 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:41.498 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:41.498 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:53.730 08:40:30 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:53.730 08:40:30 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:53.730 08:40:30 -- common/autotest_common.sh@10 -- # set +x 00:03:53.730 08:40:30 -- spdk/autotest.sh@78 -- # rm -f 00:03:53.730 08:40:30 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:54.670 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:54.670 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:54.670 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:54.670 08:40:31 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:54.670 08:40:31 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:54.670 08:40:31 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:54.670 08:40:31 -- common/autotest_common.sh@1656 -- 
# local nvme bdf 00:03:54.670 08:40:31 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:54.670 08:40:31 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:54.670 08:40:31 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:54.670 08:40:31 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:54.670 08:40:31 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:54.670 08:40:31 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:54.670 08:40:31 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:03:54.670 08:40:31 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:03:54.670 08:40:31 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:54.670 08:40:31 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:54.670 08:40:31 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:54.670 08:40:31 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:03:54.670 08:40:31 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:03:54.670 08:40:31 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:54.670 08:40:31 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:54.670 08:40:31 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:54.670 08:40:31 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:03:54.670 08:40:31 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:03:54.670 08:40:31 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:54.670 08:40:31 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:54.670 08:40:31 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:54.670 08:40:31 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:54.670 08:40:31 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:54.670 08:40:31 -- 
spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:54.670 08:40:31 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:54.670 08:40:31 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:54.670 No valid GPT data, bailing 00:03:54.670 08:40:31 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:54.670 08:40:31 -- scripts/common.sh@394 -- # pt= 00:03:54.670 08:40:31 -- scripts/common.sh@395 -- # return 1 00:03:54.670 08:40:31 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:54.670 1+0 records in 00:03:54.670 1+0 records out 00:03:54.670 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00650347 s, 161 MB/s 00:03:54.670 08:40:31 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:54.670 08:40:31 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:54.670 08:40:31 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:54.670 08:40:31 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:54.670 08:40:31 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:54.933 No valid GPT data, bailing 00:03:54.933 08:40:31 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:54.933 08:40:31 -- scripts/common.sh@394 -- # pt= 00:03:54.933 08:40:31 -- scripts/common.sh@395 -- # return 1 00:03:54.933 08:40:31 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:54.933 1+0 records in 00:03:54.933 1+0 records out 00:03:54.933 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00679159 s, 154 MB/s 00:03:54.933 08:40:31 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:54.933 08:40:31 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:54.933 08:40:31 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:03:54.933 08:40:31 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:03:54.933 08:40:31 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:54.933 No valid GPT data, bailing 00:03:54.933 08:40:31 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:54.933 08:40:31 -- scripts/common.sh@394 -- # pt= 00:03:54.933 08:40:31 -- scripts/common.sh@395 -- # return 1 00:03:54.933 08:40:31 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:54.933 1+0 records in 00:03:54.933 1+0 records out 00:03:54.933 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00456345 s, 230 MB/s 00:03:54.933 08:40:31 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:54.933 08:40:31 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:54.933 08:40:31 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:03:54.933 08:40:31 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:03:54.933 08:40:31 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:54.933 No valid GPT data, bailing 00:03:54.933 08:40:31 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:54.933 08:40:31 -- scripts/common.sh@394 -- # pt= 00:03:54.933 08:40:31 -- scripts/common.sh@395 -- # return 1 00:03:54.933 08:40:31 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:54.933 1+0 records in 00:03:54.933 1+0 records out 00:03:54.933 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00699373 s, 150 MB/s 00:03:54.933 08:40:31 -- spdk/autotest.sh@105 -- # sync 00:03:54.933 08:40:31 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:54.933 08:40:31 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:54.933 08:40:31 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:58.275 08:40:34 -- spdk/autotest.sh@111 -- # uname -s 00:03:58.275 08:40:34 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:58.275 08:40:34 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:58.275 08:40:34 -- spdk/autotest.sh@115 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:58.845 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:58.845 Hugepages 00:03:58.845 node hugesize free / total 00:03:58.845 node0 1048576kB 0 / 0 00:03:58.845 node0 2048kB 0 / 0 00:03:58.845 00:03:58.845 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:58.845 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:59.105 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:59.105 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:59.105 08:40:35 -- spdk/autotest.sh@117 -- # uname -s 00:03:59.105 08:40:35 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:59.105 08:40:35 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:59.105 08:40:35 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:00.045 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:00.045 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:00.305 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:00.305 08:40:36 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:01.248 08:40:37 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:01.248 08:40:37 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:01.248 08:40:37 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:01.248 08:40:37 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:01.248 08:40:37 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:01.248 08:40:37 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:01.248 08:40:37 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:01.248 08:40:37 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:01.248 08:40:37 -- common/autotest_common.sh@1497 -- # jq -r 
'.config[].params.traddr' 00:04:01.248 08:40:37 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:01.248 08:40:37 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:01.248 08:40:37 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:01.818 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:01.818 Waiting for block devices as requested 00:04:02.087 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:02.087 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:02.087 08:40:38 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:02.087 08:40:38 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:02.087 08:40:38 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:02.087 08:40:38 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:04:02.087 08:40:38 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:02.087 08:40:38 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:02.087 08:40:38 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:02.087 08:40:38 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:04:02.087 08:40:38 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:04:02.087 08:40:38 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:04:02.087 08:40:38 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:04:02.087 08:40:38 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:02.087 08:40:38 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:02.087 08:40:38 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:02.087 08:40:38 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 
00:04:02.087 08:40:38 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:02.087 08:40:38 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:04:02.087 08:40:38 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:02.087 08:40:38 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:02.087 08:40:38 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:02.087 08:40:38 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:02.087 08:40:38 -- common/autotest_common.sh@1541 -- # continue 00:04:02.087 08:40:38 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:02.087 08:40:38 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:02.087 08:40:38 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:02.353 08:40:38 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:04:02.353 08:40:38 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:02.353 08:40:38 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:02.353 08:40:38 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:02.353 08:40:38 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:02.353 08:40:38 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:02.353 08:40:38 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:02.353 08:40:38 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:02.353 08:40:38 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:02.353 08:40:38 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:02.354 08:40:38 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:02.354 08:40:38 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:02.354 08:40:38 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:02.354 08:40:38 
-- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:02.354 08:40:38 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:02.354 08:40:38 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:02.354 08:40:38 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:02.354 08:40:38 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:02.354 08:40:38 -- common/autotest_common.sh@1541 -- # continue 00:04:02.354 08:40:38 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:02.354 08:40:38 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:02.354 08:40:38 -- common/autotest_common.sh@10 -- # set +x 00:04:02.354 08:40:38 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:02.354 08:40:38 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:02.354 08:40:38 -- common/autotest_common.sh@10 -- # set +x 00:04:02.354 08:40:38 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:03.295 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:03.295 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.295 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.295 08:40:39 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:03.295 08:40:39 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:03.295 08:40:39 -- common/autotest_common.sh@10 -- # set +x 00:04:03.555 08:40:39 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:03.555 08:40:39 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:03.555 08:40:39 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:03.555 08:40:39 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:03.555 08:40:39 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:03.555 08:40:39 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:03.555 08:40:39 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:03.555 08:40:39 -- 
common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:03.555 08:40:39 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:03.555 08:40:39 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:03.555 08:40:39 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:03.555 08:40:39 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:03.555 08:40:39 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:03.555 08:40:39 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:03.555 08:40:39 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:03.555 08:40:39 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:03.555 08:40:39 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:03.555 08:40:39 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:03.555 08:40:39 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:03.555 08:40:39 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:03.555 08:40:39 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:03.555 08:40:39 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:03.555 08:40:39 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:03.555 08:40:39 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:03.555 08:40:39 -- common/autotest_common.sh@1570 -- # return 0 00:04:03.555 08:40:39 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:03.555 08:40:39 -- common/autotest_common.sh@1578 -- # return 0 00:04:03.555 08:40:39 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:03.555 08:40:39 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:03.555 08:40:39 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:03.555 08:40:39 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:03.555 08:40:39 -- 
spdk/autotest.sh@149 -- # timing_enter lib 00:04:03.555 08:40:39 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:03.555 08:40:39 -- common/autotest_common.sh@10 -- # set +x 00:04:03.555 08:40:39 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:03.555 08:40:39 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:03.555 08:40:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:03.555 08:40:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:03.555 08:40:39 -- common/autotest_common.sh@10 -- # set +x 00:04:03.555 ************************************ 00:04:03.555 START TEST env 00:04:03.555 ************************************ 00:04:03.555 08:40:39 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:03.814 * Looking for test storage... 00:04:03.814 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:03.814 08:40:40 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:03.814 08:40:40 env -- common/autotest_common.sh@1681 -- # lcov --version 00:04:03.814 08:40:40 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:03.814 08:40:40 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:03.814 08:40:40 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:03.814 08:40:40 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:03.814 08:40:40 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:03.814 08:40:40 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:03.814 08:40:40 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:03.814 08:40:40 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:03.814 08:40:40 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:03.814 08:40:40 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:03.814 08:40:40 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:03.814 08:40:40 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:03.814 08:40:40 env -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:04:03.814 08:40:40 env -- scripts/common.sh@344 -- # case "$op" in 00:04:03.814 08:40:40 env -- scripts/common.sh@345 -- # : 1 00:04:03.814 08:40:40 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:03.814 08:40:40 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:03.814 08:40:40 env -- scripts/common.sh@365 -- # decimal 1 00:04:03.814 08:40:40 env -- scripts/common.sh@353 -- # local d=1 00:04:03.814 08:40:40 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:03.814 08:40:40 env -- scripts/common.sh@355 -- # echo 1 00:04:03.814 08:40:40 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:03.814 08:40:40 env -- scripts/common.sh@366 -- # decimal 2 00:04:03.814 08:40:40 env -- scripts/common.sh@353 -- # local d=2 00:04:03.814 08:40:40 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:03.814 08:40:40 env -- scripts/common.sh@355 -- # echo 2 00:04:03.814 08:40:40 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:03.814 08:40:40 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:03.814 08:40:40 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:03.814 08:40:40 env -- scripts/common.sh@368 -- # return 0 00:04:03.814 08:40:40 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:03.814 08:40:40 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:03.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.814 --rc genhtml_branch_coverage=1 00:04:03.814 --rc genhtml_function_coverage=1 00:04:03.814 --rc genhtml_legend=1 00:04:03.814 --rc geninfo_all_blocks=1 00:04:03.814 --rc geninfo_unexecuted_blocks=1 00:04:03.814 00:04:03.814 ' 00:04:03.814 08:40:40 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:03.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.814 --rc genhtml_branch_coverage=1 00:04:03.814 --rc genhtml_function_coverage=1 
00:04:03.814 --rc genhtml_legend=1 00:04:03.814 --rc geninfo_all_blocks=1 00:04:03.814 --rc geninfo_unexecuted_blocks=1 00:04:03.814 00:04:03.814 ' 00:04:03.814 08:40:40 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:03.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.814 --rc genhtml_branch_coverage=1 00:04:03.814 --rc genhtml_function_coverage=1 00:04:03.814 --rc genhtml_legend=1 00:04:03.814 --rc geninfo_all_blocks=1 00:04:03.814 --rc geninfo_unexecuted_blocks=1 00:04:03.815 00:04:03.815 ' 00:04:03.815 08:40:40 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:03.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.815 --rc genhtml_branch_coverage=1 00:04:03.815 --rc genhtml_function_coverage=1 00:04:03.815 --rc genhtml_legend=1 00:04:03.815 --rc geninfo_all_blocks=1 00:04:03.815 --rc geninfo_unexecuted_blocks=1 00:04:03.815 00:04:03.815 ' 00:04:03.815 08:40:40 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:03.815 08:40:40 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:03.815 08:40:40 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:03.815 08:40:40 env -- common/autotest_common.sh@10 -- # set +x 00:04:03.815 ************************************ 00:04:03.815 START TEST env_memory 00:04:03.815 ************************************ 00:04:03.815 08:40:40 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:03.815 00:04:03.815 00:04:03.815 CUnit - A unit testing framework for C - Version 2.1-3 00:04:03.815 http://cunit.sourceforge.net/ 00:04:03.815 00:04:03.815 00:04:03.815 Suite: memory 00:04:04.075 Test: alloc and free memory map ...[2024-10-05 08:40:40.286107] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:04.075 passed 00:04:04.075 Test: mem map translation 
...[2024-10-05 08:40:40.328968] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:04.075 [2024-10-05 08:40:40.329007] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:04.075 [2024-10-05 08:40:40.329066] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:04.075 [2024-10-05 08:40:40.329085] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:04.075 passed 00:04:04.075 Test: mem map registration ...[2024-10-05 08:40:40.393291] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:04.075 [2024-10-05 08:40:40.393329] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:04.075 passed 00:04:04.075 Test: mem map adjacent registrations ...passed 00:04:04.075 00:04:04.075 Run Summary: Type Total Ran Passed Failed Inactive 00:04:04.075 suites 1 1 n/a 0 0 00:04:04.075 tests 4 4 4 0 0 00:04:04.075 asserts 152 152 152 0 n/a 00:04:04.075 00:04:04.075 Elapsed time = 0.240 seconds 00:04:04.075 00:04:04.075 real 0m0.302s 00:04:04.075 user 0m0.255s 00:04:04.075 sys 0m0.035s 00:04:04.075 08:40:40 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:04.075 08:40:40 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:04.075 ************************************ 00:04:04.075 END TEST env_memory 00:04:04.075 ************************************ 00:04:04.335 08:40:40 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:04.335 08:40:40 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:04.335 08:40:40 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:04.335 08:40:40 env -- common/autotest_common.sh@10 -- # set +x 00:04:04.335 ************************************ 00:04:04.335 START TEST env_vtophys 00:04:04.335 ************************************ 00:04:04.335 08:40:40 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:04.335 EAL: lib.eal log level changed from notice to debug 00:04:04.335 EAL: Detected lcore 0 as core 0 on socket 0 00:04:04.335 EAL: Detected lcore 1 as core 0 on socket 0 00:04:04.335 EAL: Detected lcore 2 as core 0 on socket 0 00:04:04.336 EAL: Detected lcore 3 as core 0 on socket 0 00:04:04.336 EAL: Detected lcore 4 as core 0 on socket 0 00:04:04.336 EAL: Detected lcore 5 as core 0 on socket 0 00:04:04.336 EAL: Detected lcore 6 as core 0 on socket 0 00:04:04.336 EAL: Detected lcore 7 as core 0 on socket 0 00:04:04.336 EAL: Detected lcore 8 as core 0 on socket 0 00:04:04.336 EAL: Detected lcore 9 as core 0 on socket 0 00:04:04.336 EAL: Maximum logical cores by configuration: 128 00:04:04.336 EAL: Detected CPU lcores: 10 00:04:04.336 EAL: Detected NUMA nodes: 1 00:04:04.336 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:04.336 EAL: Detected shared linkage of DPDK 00:04:04.336 EAL: No shared files mode enabled, IPC will be disabled 00:04:04.336 EAL: Selected IOVA mode 'PA' 00:04:04.336 EAL: Probing VFIO support... 00:04:04.336 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:04.336 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:04.336 EAL: Ask a virtual area of 0x2e000 bytes 00:04:04.336 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:04.336 EAL: Setting up physically contiguous memory... 
00:04:04.336 EAL: Setting maximum number of open files to 524288 00:04:04.336 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:04.336 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:04.336 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.336 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:04.336 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.336 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.336 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:04.336 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:04.336 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.336 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:04.336 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.336 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.336 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:04.336 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:04.336 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.336 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:04.336 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.336 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.336 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:04.336 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:04.336 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.336 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:04.336 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.336 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.336 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:04.336 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:04.336 EAL: Hugepages will be freed exactly as allocated. 
00:04:04.336 EAL: No shared files mode enabled, IPC is disabled 00:04:04.336 EAL: No shared files mode enabled, IPC is disabled 00:04:04.336 EAL: TSC frequency is ~2290000 KHz 00:04:04.336 EAL: Main lcore 0 is ready (tid=7fe103655a40;cpuset=[0]) 00:04:04.336 EAL: Trying to obtain current memory policy. 00:04:04.336 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.336 EAL: Restoring previous memory policy: 0 00:04:04.336 EAL: request: mp_malloc_sync 00:04:04.336 EAL: No shared files mode enabled, IPC is disabled 00:04:04.336 EAL: Heap on socket 0 was expanded by 2MB 00:04:04.336 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:04.336 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:04.336 EAL: Mem event callback 'spdk:(nil)' registered 00:04:04.336 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:04.336 00:04:04.336 00:04:04.336 CUnit - A unit testing framework for C - Version 2.1-3 00:04:04.336 http://cunit.sourceforge.net/ 00:04:04.336 00:04:04.336 00:04:04.336 Suite: components_suite 00:04:04.907 Test: vtophys_malloc_test ...passed 00:04:04.907 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:04.907 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.907 EAL: Restoring previous memory policy: 4 00:04:04.907 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.907 EAL: request: mp_malloc_sync 00:04:04.907 EAL: No shared files mode enabled, IPC is disabled 00:04:04.907 EAL: Heap on socket 0 was expanded by 4MB 00:04:04.907 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.907 EAL: request: mp_malloc_sync 00:04:04.907 EAL: No shared files mode enabled, IPC is disabled 00:04:04.907 EAL: Heap on socket 0 was shrunk by 4MB 00:04:04.907 EAL: Trying to obtain current memory policy. 
00:04:04.907 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.907 EAL: Restoring previous memory policy: 4 00:04:04.907 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.907 EAL: request: mp_malloc_sync 00:04:04.907 EAL: No shared files mode enabled, IPC is disabled 00:04:04.907 EAL: Heap on socket 0 was expanded by 6MB 00:04:04.907 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.907 EAL: request: mp_malloc_sync 00:04:04.907 EAL: No shared files mode enabled, IPC is disabled 00:04:04.907 EAL: Heap on socket 0 was shrunk by 6MB 00:04:04.907 EAL: Trying to obtain current memory policy. 00:04:04.907 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.907 EAL: Restoring previous memory policy: 4 00:04:04.907 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.907 EAL: request: mp_malloc_sync 00:04:04.907 EAL: No shared files mode enabled, IPC is disabled 00:04:04.907 EAL: Heap on socket 0 was expanded by 10MB 00:04:04.907 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.907 EAL: request: mp_malloc_sync 00:04:04.907 EAL: No shared files mode enabled, IPC is disabled 00:04:04.907 EAL: Heap on socket 0 was shrunk by 10MB 00:04:04.907 EAL: Trying to obtain current memory policy. 00:04:04.907 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.907 EAL: Restoring previous memory policy: 4 00:04:04.907 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.907 EAL: request: mp_malloc_sync 00:04:04.907 EAL: No shared files mode enabled, IPC is disabled 00:04:04.907 EAL: Heap on socket 0 was expanded by 18MB 00:04:04.908 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.908 EAL: request: mp_malloc_sync 00:04:04.908 EAL: No shared files mode enabled, IPC is disabled 00:04:04.908 EAL: Heap on socket 0 was shrunk by 18MB 00:04:04.908 EAL: Trying to obtain current memory policy. 
00:04:04.908 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.908 EAL: Restoring previous memory policy: 4 00:04:04.908 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.908 EAL: request: mp_malloc_sync 00:04:04.908 EAL: No shared files mode enabled, IPC is disabled 00:04:04.908 EAL: Heap on socket 0 was expanded by 34MB 00:04:04.908 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.908 EAL: request: mp_malloc_sync 00:04:04.908 EAL: No shared files mode enabled, IPC is disabled 00:04:04.908 EAL: Heap on socket 0 was shrunk by 34MB 00:04:04.908 EAL: Trying to obtain current memory policy. 00:04:04.908 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.168 EAL: Restoring previous memory policy: 4 00:04:05.168 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.168 EAL: request: mp_malloc_sync 00:04:05.168 EAL: No shared files mode enabled, IPC is disabled 00:04:05.168 EAL: Heap on socket 0 was expanded by 66MB 00:04:05.168 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.168 EAL: request: mp_malloc_sync 00:04:05.168 EAL: No shared files mode enabled, IPC is disabled 00:04:05.168 EAL: Heap on socket 0 was shrunk by 66MB 00:04:05.168 EAL: Trying to obtain current memory policy. 00:04:05.168 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.168 EAL: Restoring previous memory policy: 4 00:04:05.168 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.168 EAL: request: mp_malloc_sync 00:04:05.168 EAL: No shared files mode enabled, IPC is disabled 00:04:05.168 EAL: Heap on socket 0 was expanded by 130MB 00:04:05.460 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.460 EAL: request: mp_malloc_sync 00:04:05.460 EAL: No shared files mode enabled, IPC is disabled 00:04:05.460 EAL: Heap on socket 0 was shrunk by 130MB 00:04:05.749 EAL: Trying to obtain current memory policy. 
00:04:05.749 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.749 EAL: Restoring previous memory policy: 4 00:04:05.750 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.750 EAL: request: mp_malloc_sync 00:04:05.750 EAL: No shared files mode enabled, IPC is disabled 00:04:05.750 EAL: Heap on socket 0 was expanded by 258MB 00:04:06.319 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.319 EAL: request: mp_malloc_sync 00:04:06.319 EAL: No shared files mode enabled, IPC is disabled 00:04:06.319 EAL: Heap on socket 0 was shrunk by 258MB 00:04:06.580 EAL: Trying to obtain current memory policy. 00:04:06.580 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.840 EAL: Restoring previous memory policy: 4 00:04:06.840 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.840 EAL: request: mp_malloc_sync 00:04:06.840 EAL: No shared files mode enabled, IPC is disabled 00:04:06.840 EAL: Heap on socket 0 was expanded by 514MB 00:04:07.778 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.778 EAL: request: mp_malloc_sync 00:04:07.778 EAL: No shared files mode enabled, IPC is disabled 00:04:07.778 EAL: Heap on socket 0 was shrunk by 514MB 00:04:08.717 EAL: Trying to obtain current memory policy. 
00:04:08.717 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.717 EAL: Restoring previous memory policy: 4 00:04:08.717 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.717 EAL: request: mp_malloc_sync 00:04:08.717 EAL: No shared files mode enabled, IPC is disabled 00:04:08.717 EAL: Heap on socket 0 was expanded by 1026MB 00:04:10.627 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.627 EAL: request: mp_malloc_sync 00:04:10.627 EAL: No shared files mode enabled, IPC is disabled 00:04:10.627 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:12.009 passed 00:04:12.009 00:04:12.009 Run Summary: Type Total Ran Passed Failed Inactive 00:04:12.009 suites 1 1 n/a 0 0 00:04:12.009 tests 2 2 2 0 0 00:04:12.009 asserts 5817 5817 5817 0 n/a 00:04:12.009 00:04:12.009 Elapsed time = 7.580 seconds 00:04:12.009 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.009 EAL: request: mp_malloc_sync 00:04:12.009 EAL: No shared files mode enabled, IPC is disabled 00:04:12.009 EAL: Heap on socket 0 was shrunk by 2MB 00:04:12.009 EAL: No shared files mode enabled, IPC is disabled 00:04:12.009 EAL: No shared files mode enabled, IPC is disabled 00:04:12.009 EAL: No shared files mode enabled, IPC is disabled 00:04:12.009 00:04:12.009 real 0m7.890s 00:04:12.009 user 0m6.930s 00:04:12.009 sys 0m0.811s 00:04:12.009 08:40:48 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:12.009 08:40:48 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:12.009 ************************************ 00:04:12.009 END TEST env_vtophys 00:04:12.009 ************************************ 00:04:12.270 08:40:48 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:12.270 08:40:48 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:12.270 08:40:48 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:12.270 08:40:48 env -- common/autotest_common.sh@10 -- # set +x 00:04:12.270 
************************************ 00:04:12.270 START TEST env_pci 00:04:12.270 ************************************ 00:04:12.270 08:40:48 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:12.270 00:04:12.270 00:04:12.270 CUnit - A unit testing framework for C - Version 2.1-3 00:04:12.270 http://cunit.sourceforge.net/ 00:04:12.270 00:04:12.270 00:04:12.270 Suite: pci 00:04:12.270 Test: pci_hook ...[2024-10-05 08:40:48.583013] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56696 has claimed it 00:04:12.270 passed 00:04:12.270 00:04:12.270 Run Summary: Type Total Ran Passed Failed Inactive 00:04:12.270 suites 1 1 n/a 0 0 00:04:12.270 tests 1 1 1 0 0 00:04:12.270 asserts 25 25 25 0 n/a 00:04:12.270 00:04:12.270 Elapsed time = 0.007 seconds 00:04:12.270 EAL: Cannot find device (10000:00:01.0) 00:04:12.270 EAL: Failed to attach device on primary process 00:04:12.270 00:04:12.270 real 0m0.117s 00:04:12.270 user 0m0.050s 00:04:12.270 sys 0m0.065s 00:04:12.270 08:40:48 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:12.270 08:40:48 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:12.270 ************************************ 00:04:12.270 END TEST env_pci 00:04:12.270 ************************************ 00:04:12.270 08:40:48 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:12.270 08:40:48 env -- env/env.sh@15 -- # uname 00:04:12.270 08:40:48 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:12.270 08:40:48 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:12.270 08:40:48 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:12.270 08:40:48 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:12.270 08:40:48 env 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:12.270 08:40:48 env -- common/autotest_common.sh@10 -- # set +x 00:04:12.270 ************************************ 00:04:12.270 START TEST env_dpdk_post_init 00:04:12.270 ************************************ 00:04:12.270 08:40:48 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:12.530 EAL: Detected CPU lcores: 10 00:04:12.530 EAL: Detected NUMA nodes: 1 00:04:12.530 EAL: Detected shared linkage of DPDK 00:04:12.530 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:12.530 EAL: Selected IOVA mode 'PA' 00:04:12.530 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:12.530 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:12.530 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:12.530 Starting DPDK initialization... 00:04:12.530 Starting SPDK post initialization... 00:04:12.530 SPDK NVMe probe 00:04:12.530 Attaching to 0000:00:10.0 00:04:12.530 Attaching to 0000:00:11.0 00:04:12.530 Attached to 0000:00:10.0 00:04:12.530 Attached to 0000:00:11.0 00:04:12.530 Cleaning up... 
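Annotation: env_dpdk_post_init above probes and attaches two emulated NVMe controllers by their DPDK-style PCI addresses (0000:00:10.0 and 0000:00:11.0). A small sketch of splitting that domain:bus:device.function format into numeric fields; the helper and type names are hypothetical, chosen only for illustration:

```python
# Sketch: parse a DPDK/SPDK-style PCI address "domain:bus:device.function".
# All four fields are hexadecimal; names below are made up for this example.
from typing import NamedTuple

class PciAddr(NamedTuple):
    domain: int
    bus: int
    device: int
    function: int

def parse_pci_addr(bdf: str) -> PciAddr:
    domain, bus, dev_fn = bdf.split(":")
    device, function = dev_fn.split(".")
    return PciAddr(int(domain, 16), int(bus, 16), int(device, 16), int(function, 16))

print(parse_pci_addr("0000:00:10.0"))  # PciAddr(domain=0, bus=0, device=16, function=0)
```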
00:04:12.530 00:04:12.530 real 0m0.269s 00:04:12.530 user 0m0.075s 00:04:12.530 sys 0m0.096s 00:04:12.530 08:40:48 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:12.530 08:40:48 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:12.530 ************************************ 00:04:12.530 END TEST env_dpdk_post_init 00:04:12.530 ************************************ 00:04:12.790 08:40:49 env -- env/env.sh@26 -- # uname 00:04:12.791 08:40:49 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:12.791 08:40:49 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:12.791 08:40:49 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:12.791 08:40:49 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:12.791 08:40:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:12.791 ************************************ 00:04:12.791 START TEST env_mem_callbacks 00:04:12.791 ************************************ 00:04:12.791 08:40:49 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:12.791 EAL: Detected CPU lcores: 10 00:04:12.791 EAL: Detected NUMA nodes: 1 00:04:12.791 EAL: Detected shared linkage of DPDK 00:04:12.791 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:12.791 EAL: Selected IOVA mode 'PA' 00:04:12.791 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:12.791 00:04:12.791 00:04:12.791 CUnit - A unit testing framework for C - Version 2.1-3 00:04:12.791 http://cunit.sourceforge.net/ 00:04:12.791 00:04:12.791 00:04:12.791 Suite: memory 00:04:12.791 Test: test ... 
00:04:12.791 register 0x200000200000 2097152 00:04:12.791 malloc 3145728 00:04:12.791 register 0x200000400000 4194304 00:04:12.791 buf 0x2000004fffc0 len 3145728 PASSED 00:04:12.791 malloc 64 00:04:12.791 buf 0x2000004ffec0 len 64 PASSED 00:04:12.791 malloc 4194304 00:04:12.791 register 0x200000800000 6291456 00:04:13.051 buf 0x2000009fffc0 len 4194304 PASSED 00:04:13.051 free 0x2000004fffc0 3145728 00:04:13.051 free 0x2000004ffec0 64 00:04:13.051 unregister 0x200000400000 4194304 PASSED 00:04:13.051 free 0x2000009fffc0 4194304 00:04:13.051 unregister 0x200000800000 6291456 PASSED 00:04:13.051 malloc 8388608 00:04:13.051 register 0x200000400000 10485760 00:04:13.051 buf 0x2000005fffc0 len 8388608 PASSED 00:04:13.051 free 0x2000005fffc0 8388608 00:04:13.051 unregister 0x200000400000 10485760 PASSED 00:04:13.051 passed 00:04:13.051 00:04:13.051 Run Summary: Type Total Ran Passed Failed Inactive 00:04:13.051 suites 1 1 n/a 0 0 00:04:13.051 tests 1 1 1 0 0 00:04:13.051 asserts 15 15 15 0 n/a 00:04:13.051 00:04:13.051 Elapsed time = 0.078 seconds 00:04:13.051 00:04:13.051 real 0m0.270s 00:04:13.051 user 0m0.099s 00:04:13.051 sys 0m0.069s 00:04:13.051 08:40:49 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:13.051 08:40:49 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:13.051 ************************************ 00:04:13.051 END TEST env_mem_callbacks 00:04:13.051 ************************************ 00:04:13.051 ************************************ 00:04:13.051 END TEST env 00:04:13.051 ************************************ 00:04:13.051 00:04:13.051 real 0m9.446s 00:04:13.051 user 0m7.642s 00:04:13.051 sys 0m1.449s 00:04:13.051 08:40:49 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:13.051 08:40:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:13.051 08:40:49 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:13.051 08:40:49 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:13.051 08:40:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:13.051 08:40:49 -- common/autotest_common.sh@10 -- # set +x 00:04:13.051 ************************************ 00:04:13.051 START TEST rpc 00:04:13.051 ************************************ 00:04:13.051 08:40:49 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:13.311 * Looking for test storage... 00:04:13.311 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:13.312 08:40:49 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:13.312 08:40:49 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:13.312 08:40:49 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:13.312 08:40:49 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:13.312 08:40:49 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:13.312 08:40:49 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:13.312 08:40:49 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:13.312 08:40:49 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:13.312 08:40:49 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:13.312 08:40:49 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:13.312 08:40:49 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:13.312 08:40:49 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:13.312 08:40:49 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:13.312 08:40:49 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:13.312 08:40:49 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:13.312 08:40:49 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:13.312 08:40:49 rpc -- scripts/common.sh@345 -- # : 1 00:04:13.312 08:40:49 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:13.312 08:40:49 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:13.312 08:40:49 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:13.312 08:40:49 rpc -- scripts/common.sh@353 -- # local d=1 00:04:13.312 08:40:49 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:13.312 08:40:49 rpc -- scripts/common.sh@355 -- # echo 1 00:04:13.312 08:40:49 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:13.312 08:40:49 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:13.312 08:40:49 rpc -- scripts/common.sh@353 -- # local d=2 00:04:13.312 08:40:49 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:13.312 08:40:49 rpc -- scripts/common.sh@355 -- # echo 2 00:04:13.312 08:40:49 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:13.312 08:40:49 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:13.312 08:40:49 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:13.312 08:40:49 rpc -- scripts/common.sh@368 -- # return 0 00:04:13.312 08:40:49 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:13.312 08:40:49 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:13.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.312 --rc genhtml_branch_coverage=1 00:04:13.312 --rc genhtml_function_coverage=1 00:04:13.312 --rc genhtml_legend=1 00:04:13.312 --rc geninfo_all_blocks=1 00:04:13.312 --rc geninfo_unexecuted_blocks=1 00:04:13.312 00:04:13.312 ' 00:04:13.312 08:40:49 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:13.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.312 --rc genhtml_branch_coverage=1 00:04:13.312 --rc genhtml_function_coverage=1 00:04:13.312 --rc genhtml_legend=1 00:04:13.312 --rc geninfo_all_blocks=1 00:04:13.312 --rc geninfo_unexecuted_blocks=1 00:04:13.312 00:04:13.312 ' 00:04:13.312 08:40:49 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:13.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:13.312 --rc genhtml_branch_coverage=1 00:04:13.312 --rc genhtml_function_coverage=1 00:04:13.312 --rc genhtml_legend=1 00:04:13.312 --rc geninfo_all_blocks=1 00:04:13.312 --rc geninfo_unexecuted_blocks=1 00:04:13.312 00:04:13.312 ' 00:04:13.312 08:40:49 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:13.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.312 --rc genhtml_branch_coverage=1 00:04:13.312 --rc genhtml_function_coverage=1 00:04:13.312 --rc genhtml_legend=1 00:04:13.312 --rc geninfo_all_blocks=1 00:04:13.312 --rc geninfo_unexecuted_blocks=1 00:04:13.312 00:04:13.312 ' 00:04:13.312 08:40:49 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56823 00:04:13.312 08:40:49 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:13.312 08:40:49 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:13.312 08:40:49 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56823 00:04:13.312 08:40:49 rpc -- common/autotest_common.sh@831 -- # '[' -z 56823 ']' 00:04:13.312 08:40:49 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:13.312 08:40:49 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:13.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:13.312 08:40:49 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:13.312 08:40:49 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:13.312 08:40:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.573 [2024-10-05 08:40:49.797061] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:04:13.573 [2024-10-05 08:40:49.797194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56823 ] 00:04:13.573 [2024-10-05 08:40:49.969920] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.832 [2024-10-05 08:40:50.169528] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:13.832 [2024-10-05 08:40:50.169583] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56823' to capture a snapshot of events at runtime. 00:04:13.832 [2024-10-05 08:40:50.169593] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:13.832 [2024-10-05 08:40:50.169603] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:13.832 [2024-10-05 08:40:50.169610] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56823 for offline analysis/debug. 
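Annotation: spdk_tgt was started here with `-e bdev`, and the trace_get_info output later in this run reports `"tpoint_group_mask": "0x8"` with the `bdev` group itself at mask `0x8`. Tracepoint groups are single bits in a mask; a sketch of the name-to-mask mapping, where the bit indices are inferred only from the masks printed in this log (0x2, 0x4, 0x8, 0x10), not taken from SPDK headers:

```python
# Sketch: each tracepoint group occupies one bit of the group mask; bit 3
# yields 0x8, which is what "-e bdev" enables in the run above. The
# name->bit table is reconstructed from masks visible in this log only.
GROUP_BIT = {"iscsi_conn": 1, "scsi": 2, "bdev": 3, "nvmf_rdma": 4}

def group_mask(name: str) -> str:
    """Hex mask for a single tracepoint group (inferred mapping)."""
    return hex(1 << GROUP_BIT[name])

print(group_mask("bdev"))  # 0x8
```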
00:04:13.832 [2024-10-05 08:40:50.170915] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.771 08:40:50 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:14.771 08:40:50 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:14.771 08:40:50 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:14.771 08:40:50 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:14.771 08:40:50 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:14.771 08:40:50 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:14.771 08:40:50 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:14.771 08:40:50 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:14.771 08:40:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.771 ************************************ 00:04:14.771 START TEST rpc_integrity 00:04:14.771 ************************************ 00:04:14.771 08:40:51 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:14.771 08:40:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:14.771 08:40:51 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.771 08:40:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.771 08:40:51 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.771 08:40:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:14.771 08:40:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:14.771 08:40:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:14.771 08:40:51 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:14.771 08:40:51 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.771 08:40:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.771 08:40:51 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.771 08:40:51 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:14.771 08:40:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:14.771 08:40:51 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.771 08:40:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.771 08:40:51 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.771 08:40:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:14.771 { 00:04:14.771 "name": "Malloc0", 00:04:14.771 "aliases": [ 00:04:14.771 "9dc03c8a-eb4d-420d-aeff-1f216140f976" 00:04:14.771 ], 00:04:14.771 "product_name": "Malloc disk", 00:04:14.771 "block_size": 512, 00:04:14.771 "num_blocks": 16384, 00:04:14.771 "uuid": "9dc03c8a-eb4d-420d-aeff-1f216140f976", 00:04:14.771 "assigned_rate_limits": { 00:04:14.771 "rw_ios_per_sec": 0, 00:04:14.771 "rw_mbytes_per_sec": 0, 00:04:14.771 "r_mbytes_per_sec": 0, 00:04:14.771 "w_mbytes_per_sec": 0 00:04:14.771 }, 00:04:14.771 "claimed": false, 00:04:14.771 "zoned": false, 00:04:14.771 "supported_io_types": { 00:04:14.771 "read": true, 00:04:14.771 "write": true, 00:04:14.771 "unmap": true, 00:04:14.771 "flush": true, 00:04:14.771 "reset": true, 00:04:14.771 "nvme_admin": false, 00:04:14.771 "nvme_io": false, 00:04:14.771 "nvme_io_md": false, 00:04:14.771 "write_zeroes": true, 00:04:14.771 "zcopy": true, 00:04:14.771 "get_zone_info": false, 00:04:14.771 "zone_management": false, 00:04:14.771 "zone_append": false, 00:04:14.771 "compare": false, 00:04:14.771 "compare_and_write": false, 00:04:14.771 "abort": true, 00:04:14.771 "seek_hole": false, 
00:04:14.771 "seek_data": false, 00:04:14.771 "copy": true, 00:04:14.771 "nvme_iov_md": false 00:04:14.771 }, 00:04:14.771 "memory_domains": [ 00:04:14.771 { 00:04:14.771 "dma_device_id": "system", 00:04:14.771 "dma_device_type": 1 00:04:14.771 }, 00:04:14.771 { 00:04:14.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.771 "dma_device_type": 2 00:04:14.771 } 00:04:14.771 ], 00:04:14.771 "driver_specific": {} 00:04:14.772 } 00:04:14.772 ]' 00:04:14.772 08:40:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:14.772 08:40:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:14.772 08:40:51 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:14.772 08:40:51 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.772 08:40:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.772 [2024-10-05 08:40:51.153520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:14.772 [2024-10-05 08:40:51.153574] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:14.772 [2024-10-05 08:40:51.153598] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:04:14.772 [2024-10-05 08:40:51.153609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:14.772 [2024-10-05 08:40:51.155808] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:14.772 [2024-10-05 08:40:51.155845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:14.772 Passthru0 00:04:14.772 08:40:51 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.772 08:40:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:14.772 08:40:51 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.772 08:40:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:14.772 08:40:51 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.772 08:40:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:14.772 { 00:04:14.772 "name": "Malloc0", 00:04:14.772 "aliases": [ 00:04:14.772 "9dc03c8a-eb4d-420d-aeff-1f216140f976" 00:04:14.772 ], 00:04:14.772 "product_name": "Malloc disk", 00:04:14.772 "block_size": 512, 00:04:14.772 "num_blocks": 16384, 00:04:14.772 "uuid": "9dc03c8a-eb4d-420d-aeff-1f216140f976", 00:04:14.772 "assigned_rate_limits": { 00:04:14.772 "rw_ios_per_sec": 0, 00:04:14.772 "rw_mbytes_per_sec": 0, 00:04:14.772 "r_mbytes_per_sec": 0, 00:04:14.772 "w_mbytes_per_sec": 0 00:04:14.772 }, 00:04:14.772 "claimed": true, 00:04:14.772 "claim_type": "exclusive_write", 00:04:14.772 "zoned": false, 00:04:14.772 "supported_io_types": { 00:04:14.772 "read": true, 00:04:14.772 "write": true, 00:04:14.772 "unmap": true, 00:04:14.772 "flush": true, 00:04:14.772 "reset": true, 00:04:14.772 "nvme_admin": false, 00:04:14.772 "nvme_io": false, 00:04:14.772 "nvme_io_md": false, 00:04:14.772 "write_zeroes": true, 00:04:14.772 "zcopy": true, 00:04:14.772 "get_zone_info": false, 00:04:14.772 "zone_management": false, 00:04:14.772 "zone_append": false, 00:04:14.772 "compare": false, 00:04:14.772 "compare_and_write": false, 00:04:14.772 "abort": true, 00:04:14.772 "seek_hole": false, 00:04:14.772 "seek_data": false, 00:04:14.772 "copy": true, 00:04:14.772 "nvme_iov_md": false 00:04:14.772 }, 00:04:14.772 "memory_domains": [ 00:04:14.772 { 00:04:14.772 "dma_device_id": "system", 00:04:14.772 "dma_device_type": 1 00:04:14.772 }, 00:04:14.772 { 00:04:14.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.772 "dma_device_type": 2 00:04:14.772 } 00:04:14.772 ], 00:04:14.772 "driver_specific": {} 00:04:14.772 }, 00:04:14.772 { 00:04:14.772 "name": "Passthru0", 00:04:14.772 "aliases": [ 00:04:14.772 "0db73e7c-1e06-5e18-9457-b1f761acdfd1" 00:04:14.772 ], 00:04:14.772 "product_name": "passthru", 00:04:14.772 
"block_size": 512, 00:04:14.772 "num_blocks": 16384, 00:04:14.772 "uuid": "0db73e7c-1e06-5e18-9457-b1f761acdfd1", 00:04:14.772 "assigned_rate_limits": { 00:04:14.772 "rw_ios_per_sec": 0, 00:04:14.772 "rw_mbytes_per_sec": 0, 00:04:14.772 "r_mbytes_per_sec": 0, 00:04:14.772 "w_mbytes_per_sec": 0 00:04:14.772 }, 00:04:14.772 "claimed": false, 00:04:14.772 "zoned": false, 00:04:14.772 "supported_io_types": { 00:04:14.772 "read": true, 00:04:14.772 "write": true, 00:04:14.772 "unmap": true, 00:04:14.772 "flush": true, 00:04:14.772 "reset": true, 00:04:14.772 "nvme_admin": false, 00:04:14.772 "nvme_io": false, 00:04:14.772 "nvme_io_md": false, 00:04:14.772 "write_zeroes": true, 00:04:14.772 "zcopy": true, 00:04:14.772 "get_zone_info": false, 00:04:14.772 "zone_management": false, 00:04:14.772 "zone_append": false, 00:04:14.772 "compare": false, 00:04:14.772 "compare_and_write": false, 00:04:14.772 "abort": true, 00:04:14.772 "seek_hole": false, 00:04:14.772 "seek_data": false, 00:04:14.772 "copy": true, 00:04:14.772 "nvme_iov_md": false 00:04:14.772 }, 00:04:14.772 "memory_domains": [ 00:04:14.772 { 00:04:14.772 "dma_device_id": "system", 00:04:14.772 "dma_device_type": 1 00:04:14.772 }, 00:04:14.772 { 00:04:14.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.772 "dma_device_type": 2 00:04:14.772 } 00:04:14.772 ], 00:04:14.772 "driver_specific": { 00:04:14.772 "passthru": { 00:04:14.772 "name": "Passthru0", 00:04:14.772 "base_bdev_name": "Malloc0" 00:04:14.772 } 00:04:14.772 } 00:04:14.772 } 00:04:14.772 ]' 00:04:14.772 08:40:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:14.772 08:40:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:14.772 08:40:51 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:14.772 08:40:51 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.772 08:40:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.772 08:40:51 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.772 08:40:51 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:14.772 08:40:51 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.772 08:40:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.032 08:40:51 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.032 08:40:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:15.032 08:40:51 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.032 08:40:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.032 08:40:51 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.032 08:40:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:15.032 08:40:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:15.032 08:40:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:15.032 00:04:15.032 real 0m0.329s 00:04:15.032 user 0m0.177s 00:04:15.032 sys 0m0.048s 00:04:15.032 08:40:51 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:15.032 08:40:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.032 ************************************ 00:04:15.032 END TEST rpc_integrity 00:04:15.032 ************************************ 00:04:15.032 08:40:51 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:15.032 08:40:51 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:15.032 08:40:51 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:15.032 08:40:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.032 ************************************ 00:04:15.032 START TEST rpc_plugins 00:04:15.032 ************************************ 00:04:15.032 08:40:51 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:15.032 08:40:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:04:15.032 08:40:51 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.032 08:40:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:15.032 08:40:51 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.032 08:40:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:15.032 08:40:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:15.032 08:40:51 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.032 08:40:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:15.032 08:40:51 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.032 08:40:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:15.032 { 00:04:15.032 "name": "Malloc1", 00:04:15.032 "aliases": [ 00:04:15.032 "2ecc5e95-8d6a-4602-9f68-8169d59e7bb8" 00:04:15.032 ], 00:04:15.032 "product_name": "Malloc disk", 00:04:15.032 "block_size": 4096, 00:04:15.032 "num_blocks": 256, 00:04:15.032 "uuid": "2ecc5e95-8d6a-4602-9f68-8169d59e7bb8", 00:04:15.032 "assigned_rate_limits": { 00:04:15.032 "rw_ios_per_sec": 0, 00:04:15.032 "rw_mbytes_per_sec": 0, 00:04:15.032 "r_mbytes_per_sec": 0, 00:04:15.032 "w_mbytes_per_sec": 0 00:04:15.032 }, 00:04:15.032 "claimed": false, 00:04:15.032 "zoned": false, 00:04:15.032 "supported_io_types": { 00:04:15.032 "read": true, 00:04:15.032 "write": true, 00:04:15.032 "unmap": true, 00:04:15.032 "flush": true, 00:04:15.032 "reset": true, 00:04:15.032 "nvme_admin": false, 00:04:15.032 "nvme_io": false, 00:04:15.032 "nvme_io_md": false, 00:04:15.032 "write_zeroes": true, 00:04:15.032 "zcopy": true, 00:04:15.032 "get_zone_info": false, 00:04:15.032 "zone_management": false, 00:04:15.032 "zone_append": false, 00:04:15.032 "compare": false, 00:04:15.032 "compare_and_write": false, 00:04:15.032 "abort": true, 00:04:15.032 "seek_hole": false, 00:04:15.032 "seek_data": false, 00:04:15.032 "copy": 
true, 00:04:15.032 "nvme_iov_md": false 00:04:15.032 }, 00:04:15.032 "memory_domains": [ 00:04:15.032 { 00:04:15.032 "dma_device_id": "system", 00:04:15.032 "dma_device_type": 1 00:04:15.032 }, 00:04:15.032 { 00:04:15.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.032 "dma_device_type": 2 00:04:15.032 } 00:04:15.032 ], 00:04:15.032 "driver_specific": {} 00:04:15.032 } 00:04:15.032 ]' 00:04:15.032 08:40:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:15.032 08:40:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:15.032 08:40:51 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:15.032 08:40:51 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.032 08:40:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:15.032 08:40:51 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.032 08:40:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:15.032 08:40:51 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.032 08:40:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:15.303 08:40:51 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.303 08:40:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:15.303 08:40:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:15.303 08:40:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:15.303 00:04:15.303 real 0m0.164s 00:04:15.303 user 0m0.092s 00:04:15.303 sys 0m0.034s 00:04:15.303 08:40:51 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:15.303 08:40:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:15.303 ************************************ 00:04:15.303 END TEST rpc_plugins 00:04:15.303 ************************************ 00:04:15.303 08:40:51 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:15.303 08:40:51 rpc -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:15.303 08:40:51 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:15.303 08:40:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.303 ************************************ 00:04:15.303 START TEST rpc_trace_cmd_test 00:04:15.303 ************************************ 00:04:15.303 08:40:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:15.303 08:40:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:15.303 08:40:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:15.303 08:40:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.303 08:40:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:15.303 08:40:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.303 08:40:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:15.303 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56823", 00:04:15.303 "tpoint_group_mask": "0x8", 00:04:15.303 "iscsi_conn": { 00:04:15.303 "mask": "0x2", 00:04:15.303 "tpoint_mask": "0x0" 00:04:15.303 }, 00:04:15.303 "scsi": { 00:04:15.303 "mask": "0x4", 00:04:15.303 "tpoint_mask": "0x0" 00:04:15.303 }, 00:04:15.303 "bdev": { 00:04:15.303 "mask": "0x8", 00:04:15.303 "tpoint_mask": "0xffffffffffffffff" 00:04:15.303 }, 00:04:15.303 "nvmf_rdma": { 00:04:15.303 "mask": "0x10", 00:04:15.303 "tpoint_mask": "0x0" 00:04:15.303 }, 00:04:15.303 "nvmf_tcp": { 00:04:15.303 "mask": "0x20", 00:04:15.303 "tpoint_mask": "0x0" 00:04:15.303 }, 00:04:15.303 "ftl": { 00:04:15.303 "mask": "0x40", 00:04:15.303 "tpoint_mask": "0x0" 00:04:15.303 }, 00:04:15.303 "blobfs": { 00:04:15.303 "mask": "0x80", 00:04:15.303 "tpoint_mask": "0x0" 00:04:15.303 }, 00:04:15.303 "dsa": { 00:04:15.303 "mask": "0x200", 00:04:15.303 "tpoint_mask": "0x0" 00:04:15.303 }, 00:04:15.303 "thread": { 00:04:15.303 "mask": "0x400", 00:04:15.303 
"tpoint_mask": "0x0" 00:04:15.303 }, 00:04:15.303 "nvme_pcie": { 00:04:15.303 "mask": "0x800", 00:04:15.303 "tpoint_mask": "0x0" 00:04:15.303 }, 00:04:15.303 "iaa": { 00:04:15.303 "mask": "0x1000", 00:04:15.303 "tpoint_mask": "0x0" 00:04:15.303 }, 00:04:15.303 "nvme_tcp": { 00:04:15.303 "mask": "0x2000", 00:04:15.303 "tpoint_mask": "0x0" 00:04:15.303 }, 00:04:15.303 "bdev_nvme": { 00:04:15.303 "mask": "0x4000", 00:04:15.303 "tpoint_mask": "0x0" 00:04:15.303 }, 00:04:15.303 "sock": { 00:04:15.303 "mask": "0x8000", 00:04:15.303 "tpoint_mask": "0x0" 00:04:15.303 }, 00:04:15.303 "blob": { 00:04:15.303 "mask": "0x10000", 00:04:15.303 "tpoint_mask": "0x0" 00:04:15.303 }, 00:04:15.303 "bdev_raid": { 00:04:15.303 "mask": "0x20000", 00:04:15.303 "tpoint_mask": "0x0" 00:04:15.303 }, 00:04:15.303 "scheduler": { 00:04:15.303 "mask": "0x40000", 00:04:15.303 "tpoint_mask": "0x0" 00:04:15.303 } 00:04:15.303 }' 00:04:15.303 08:40:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:15.303 08:40:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:15.303 08:40:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:15.303 08:40:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:15.303 08:40:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:15.576 08:40:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:15.576 08:40:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:15.576 08:40:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:15.576 08:40:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:15.576 08:40:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:15.576 00:04:15.576 real 0m0.233s 00:04:15.576 user 0m0.186s 00:04:15.576 sys 0m0.036s 00:04:15.576 08:40:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:04:15.576 08:40:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:15.576 ************************************ 00:04:15.576 END TEST rpc_trace_cmd_test 00:04:15.576 ************************************ 00:04:15.576 08:40:51 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:15.576 08:40:51 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:15.576 08:40:51 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:15.576 08:40:51 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:15.576 08:40:51 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:15.576 08:40:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.576 ************************************ 00:04:15.576 START TEST rpc_daemon_integrity 00:04:15.576 ************************************ 00:04:15.576 08:40:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:15.576 08:40:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:15.576 08:40:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.576 08:40:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.576 08:40:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.576 08:40:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:15.576 08:40:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:15.576 08:40:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:15.576 08:40:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:15.576 08:40:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.576 08:40:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.576 08:40:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.576 08:40:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:04:15.576 08:40:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:15.576 08:40:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.576 08:40:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.576 08:40:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.576 08:40:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:15.576 { 00:04:15.576 "name": "Malloc2", 00:04:15.576 "aliases": [ 00:04:15.576 "81181cd3-c812-4c25-bace-e9cf9a9a5202" 00:04:15.576 ], 00:04:15.576 "product_name": "Malloc disk", 00:04:15.576 "block_size": 512, 00:04:15.576 "num_blocks": 16384, 00:04:15.576 "uuid": "81181cd3-c812-4c25-bace-e9cf9a9a5202", 00:04:15.576 "assigned_rate_limits": { 00:04:15.576 "rw_ios_per_sec": 0, 00:04:15.576 "rw_mbytes_per_sec": 0, 00:04:15.576 "r_mbytes_per_sec": 0, 00:04:15.576 "w_mbytes_per_sec": 0 00:04:15.576 }, 00:04:15.576 "claimed": false, 00:04:15.576 "zoned": false, 00:04:15.576 "supported_io_types": { 00:04:15.576 "read": true, 00:04:15.576 "write": true, 00:04:15.576 "unmap": true, 00:04:15.576 "flush": true, 00:04:15.576 "reset": true, 00:04:15.576 "nvme_admin": false, 00:04:15.576 "nvme_io": false, 00:04:15.576 "nvme_io_md": false, 00:04:15.576 "write_zeroes": true, 00:04:15.576 "zcopy": true, 00:04:15.576 "get_zone_info": false, 00:04:15.576 "zone_management": false, 00:04:15.576 "zone_append": false, 00:04:15.576 "compare": false, 00:04:15.576 "compare_and_write": false, 00:04:15.576 "abort": true, 00:04:15.576 "seek_hole": false, 00:04:15.576 "seek_data": false, 00:04:15.576 "copy": true, 00:04:15.576 "nvme_iov_md": false 00:04:15.576 }, 00:04:15.576 "memory_domains": [ 00:04:15.576 { 00:04:15.576 "dma_device_id": "system", 00:04:15.576 "dma_device_type": 1 00:04:15.576 }, 00:04:15.576 { 00:04:15.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.576 "dma_device_type": 2 00:04:15.576 } 
00:04:15.576 ], 00:04:15.576 "driver_specific": {} 00:04:15.576 } 00:04:15.576 ]' 00:04:15.576 08:40:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:15.836 08:40:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:15.836 08:40:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:15.836 08:40:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.836 08:40:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.837 [2024-10-05 08:40:52.063614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:15.837 [2024-10-05 08:40:52.063680] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:15.837 [2024-10-05 08:40:52.063705] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:15.837 [2024-10-05 08:40:52.063716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:15.837 [2024-10-05 08:40:52.066011] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:15.837 [2024-10-05 08:40:52.066044] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:15.837 Passthru0 00:04:15.837 08:40:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.837 08:40:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:15.837 08:40:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.837 08:40:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.837 08:40:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.837 08:40:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:15.837 { 00:04:15.837 "name": "Malloc2", 00:04:15.837 "aliases": [ 00:04:15.837 "81181cd3-c812-4c25-bace-e9cf9a9a5202" 
00:04:15.837 ], 00:04:15.837 "product_name": "Malloc disk", 00:04:15.837 "block_size": 512, 00:04:15.837 "num_blocks": 16384, 00:04:15.837 "uuid": "81181cd3-c812-4c25-bace-e9cf9a9a5202", 00:04:15.837 "assigned_rate_limits": { 00:04:15.837 "rw_ios_per_sec": 0, 00:04:15.837 "rw_mbytes_per_sec": 0, 00:04:15.837 "r_mbytes_per_sec": 0, 00:04:15.837 "w_mbytes_per_sec": 0 00:04:15.837 }, 00:04:15.837 "claimed": true, 00:04:15.837 "claim_type": "exclusive_write", 00:04:15.837 "zoned": false, 00:04:15.837 "supported_io_types": { 00:04:15.837 "read": true, 00:04:15.837 "write": true, 00:04:15.837 "unmap": true, 00:04:15.837 "flush": true, 00:04:15.837 "reset": true, 00:04:15.837 "nvme_admin": false, 00:04:15.837 "nvme_io": false, 00:04:15.837 "nvme_io_md": false, 00:04:15.837 "write_zeroes": true, 00:04:15.837 "zcopy": true, 00:04:15.837 "get_zone_info": false, 00:04:15.837 "zone_management": false, 00:04:15.837 "zone_append": false, 00:04:15.837 "compare": false, 00:04:15.837 "compare_and_write": false, 00:04:15.837 "abort": true, 00:04:15.837 "seek_hole": false, 00:04:15.837 "seek_data": false, 00:04:15.837 "copy": true, 00:04:15.837 "nvme_iov_md": false 00:04:15.837 }, 00:04:15.837 "memory_domains": [ 00:04:15.837 { 00:04:15.837 "dma_device_id": "system", 00:04:15.837 "dma_device_type": 1 00:04:15.837 }, 00:04:15.837 { 00:04:15.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.837 "dma_device_type": 2 00:04:15.837 } 00:04:15.837 ], 00:04:15.837 "driver_specific": {} 00:04:15.837 }, 00:04:15.837 { 00:04:15.837 "name": "Passthru0", 00:04:15.837 "aliases": [ 00:04:15.837 "273c049f-12e7-5833-b9f5-c164fc2263e1" 00:04:15.837 ], 00:04:15.837 "product_name": "passthru", 00:04:15.837 "block_size": 512, 00:04:15.837 "num_blocks": 16384, 00:04:15.837 "uuid": "273c049f-12e7-5833-b9f5-c164fc2263e1", 00:04:15.837 "assigned_rate_limits": { 00:04:15.837 "rw_ios_per_sec": 0, 00:04:15.837 "rw_mbytes_per_sec": 0, 00:04:15.837 "r_mbytes_per_sec": 0, 00:04:15.837 "w_mbytes_per_sec": 0 
00:04:15.837 }, 00:04:15.837 "claimed": false, 00:04:15.837 "zoned": false, 00:04:15.837 "supported_io_types": { 00:04:15.837 "read": true, 00:04:15.837 "write": true, 00:04:15.837 "unmap": true, 00:04:15.837 "flush": true, 00:04:15.837 "reset": true, 00:04:15.837 "nvme_admin": false, 00:04:15.837 "nvme_io": false, 00:04:15.837 "nvme_io_md": false, 00:04:15.837 "write_zeroes": true, 00:04:15.837 "zcopy": true, 00:04:15.837 "get_zone_info": false, 00:04:15.837 "zone_management": false, 00:04:15.837 "zone_append": false, 00:04:15.837 "compare": false, 00:04:15.837 "compare_and_write": false, 00:04:15.837 "abort": true, 00:04:15.837 "seek_hole": false, 00:04:15.837 "seek_data": false, 00:04:15.837 "copy": true, 00:04:15.837 "nvme_iov_md": false 00:04:15.837 }, 00:04:15.837 "memory_domains": [ 00:04:15.837 { 00:04:15.837 "dma_device_id": "system", 00:04:15.837 "dma_device_type": 1 00:04:15.837 }, 00:04:15.837 { 00:04:15.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.837 "dma_device_type": 2 00:04:15.837 } 00:04:15.837 ], 00:04:15.837 "driver_specific": { 00:04:15.837 "passthru": { 00:04:15.837 "name": "Passthru0", 00:04:15.837 "base_bdev_name": "Malloc2" 00:04:15.837 } 00:04:15.837 } 00:04:15.837 } 00:04:15.837 ]' 00:04:15.837 08:40:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:15.837 08:40:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:15.837 08:40:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:15.837 08:40:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.837 08:40:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.837 08:40:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.837 08:40:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:15.837 08:40:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:04:15.837 08:40:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.837 08:40:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.837 08:40:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:15.837 08:40:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:15.837 08:40:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.837 08:40:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:15.837 08:40:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:15.837 08:40:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:15.837 08:40:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:15.837 00:04:15.837 real 0m0.348s 00:04:15.837 user 0m0.202s 00:04:15.837 sys 0m0.050s 00:04:15.837 08:40:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:15.837 08:40:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.837 ************************************ 00:04:15.837 END TEST rpc_daemon_integrity 00:04:15.837 ************************************ 00:04:15.837 08:40:52 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:15.837 08:40:52 rpc -- rpc/rpc.sh@84 -- # killprocess 56823 00:04:15.837 08:40:52 rpc -- common/autotest_common.sh@950 -- # '[' -z 56823 ']' 00:04:15.837 08:40:52 rpc -- common/autotest_common.sh@954 -- # kill -0 56823 00:04:16.097 08:40:52 rpc -- common/autotest_common.sh@955 -- # uname 00:04:16.097 08:40:52 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:16.097 08:40:52 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56823 00:04:16.097 08:40:52 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:16.097 08:40:52 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:16.097 
killing process with pid 56823 00:04:16.097 08:40:52 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 56823' 00:04:16.097 08:40:52 rpc -- common/autotest_common.sh@969 -- # kill 56823 00:04:16.097 08:40:52 rpc -- common/autotest_common.sh@974 -- # wait 56823 00:04:18.637 00:04:18.637 real 0m5.494s 00:04:18.637 user 0m5.955s 00:04:18.637 sys 0m0.929s 00:04:18.637 08:40:54 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:18.637 08:40:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.637 ************************************ 00:04:18.637 END TEST rpc 00:04:18.637 ************************************ 00:04:18.637 08:40:55 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:18.637 08:40:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:18.637 08:40:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:18.637 08:40:55 -- common/autotest_common.sh@10 -- # set +x 00:04:18.637 ************************************ 00:04:18.637 START TEST skip_rpc 00:04:18.637 ************************************ 00:04:18.637 08:40:55 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:18.898 * Looking for test storage... 
00:04:18.898 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:18.898 08:40:55 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:18.898 08:40:55 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:18.898 08:40:55 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:18.898 08:40:55 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:18.898 08:40:55 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:18.898 08:40:55 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:18.898 08:40:55 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:18.898 08:40:55 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:18.898 08:40:55 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:18.898 08:40:55 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:18.898 08:40:55 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:18.898 08:40:55 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:18.898 08:40:55 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:18.898 08:40:55 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:18.898 08:40:55 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:18.898 08:40:55 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:18.898 08:40:55 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:18.898 08:40:55 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:18.898 08:40:55 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:18.898 08:40:55 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:18.898 08:40:55 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:18.898 08:40:55 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:18.898 08:40:55 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:18.898 08:40:55 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:18.898 08:40:55 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:18.898 08:40:55 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:18.898 08:40:55 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:18.898 08:40:55 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:18.898 08:40:55 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:18.898 08:40:55 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:18.898 08:40:55 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:18.898 08:40:55 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:18.898 08:40:55 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:18.898 08:40:55 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:18.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.898 --rc genhtml_branch_coverage=1 00:04:18.898 --rc genhtml_function_coverage=1 00:04:18.898 --rc genhtml_legend=1 00:04:18.898 --rc geninfo_all_blocks=1 00:04:18.898 --rc geninfo_unexecuted_blocks=1 00:04:18.898 00:04:18.898 ' 00:04:18.898 08:40:55 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:18.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.898 --rc genhtml_branch_coverage=1 00:04:18.898 --rc genhtml_function_coverage=1 00:04:18.898 --rc genhtml_legend=1 00:04:18.898 --rc geninfo_all_blocks=1 00:04:18.898 --rc geninfo_unexecuted_blocks=1 00:04:18.898 00:04:18.898 ' 00:04:18.898 08:40:55 skip_rpc -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:04:18.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.898 --rc genhtml_branch_coverage=1 00:04:18.898 --rc genhtml_function_coverage=1 00:04:18.898 --rc genhtml_legend=1 00:04:18.898 --rc geninfo_all_blocks=1 00:04:18.898 --rc geninfo_unexecuted_blocks=1 00:04:18.898 00:04:18.898 ' 00:04:18.898 08:40:55 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:18.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.898 --rc genhtml_branch_coverage=1 00:04:18.898 --rc genhtml_function_coverage=1 00:04:18.898 --rc genhtml_legend=1 00:04:18.898 --rc geninfo_all_blocks=1 00:04:18.898 --rc geninfo_unexecuted_blocks=1 00:04:18.898 00:04:18.898 ' 00:04:18.898 08:40:55 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:18.898 08:40:55 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:18.898 08:40:55 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:18.898 08:40:55 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:18.898 08:40:55 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:18.898 08:40:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.898 ************************************ 00:04:18.898 START TEST skip_rpc 00:04:18.899 ************************************ 00:04:18.899 08:40:55 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:18.899 08:40:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57057 00:04:18.899 08:40:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:18.899 08:40:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:18.899 08:40:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:19.159 [2024-10-05 08:40:55.376148] Starting SPDK v25.01-pre 
git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:04:19.159 [2024-10-05 08:40:55.376254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57057 ] 00:04:19.159 [2024-10-05 08:40:55.539985] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.418 [2024-10-05 08:40:55.776657] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.700 08:41:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:24.700 08:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:24.700 08:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:24.700 08:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:24.700 08:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:24.700 08:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:24.700 08:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:24.700 08:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:24.700 08:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:24.700 08:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.700 08:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:24.700 08:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:24.700 08:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:24.700 08:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:24.700 08:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:04:24.700 08:41:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:24.700 08:41:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57057 00:04:24.700 08:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 57057 ']' 00:04:24.700 08:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 57057 00:04:24.700 08:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:24.700 08:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:24.700 08:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57057 00:04:24.700 08:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:24.700 08:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:24.700 killing process with pid 57057 00:04:24.700 08:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57057' 00:04:24.700 08:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 57057 00:04:24.700 08:41:00 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 57057 00:04:26.610 00:04:26.610 real 0m7.673s 00:04:26.610 user 0m7.024s 00:04:26.610 sys 0m0.566s 00:04:26.610 08:41:02 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:26.610 08:41:02 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.610 ************************************ 00:04:26.610 END TEST skip_rpc 00:04:26.610 ************************************ 00:04:26.610 08:41:03 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:26.610 08:41:03 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:26.610 08:41:03 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:26.610 08:41:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.610 
************************************ 00:04:26.610 START TEST skip_rpc_with_json 00:04:26.610 ************************************ 00:04:26.610 08:41:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:26.610 08:41:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:26.610 08:41:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57167 00:04:26.610 08:41:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:26.610 08:41:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:26.610 08:41:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57167 00:04:26.610 08:41:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 57167 ']' 00:04:26.610 08:41:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:26.610 08:41:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:26.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:26.610 08:41:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:26.610 08:41:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:26.610 08:41:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:26.870 [2024-10-05 08:41:03.114077] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:04:26.870 [2024-10-05 08:41:03.114188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57167 ] 00:04:26.870 [2024-10-05 08:41:03.275760] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.130 [2024-10-05 08:41:03.519855] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.080 08:41:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:28.080 08:41:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:28.080 08:41:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:28.080 08:41:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.080 08:41:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:28.080 [2024-10-05 08:41:04.479377] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:28.080 request: 00:04:28.080 { 00:04:28.080 "trtype": "tcp", 00:04:28.080 "method": "nvmf_get_transports", 00:04:28.080 "req_id": 1 00:04:28.080 } 00:04:28.080 Got JSON-RPC error response 00:04:28.080 response: 00:04:28.080 { 00:04:28.080 "code": -19, 00:04:28.080 "message": "No such device" 00:04:28.080 } 00:04:28.080 08:41:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:28.080 08:41:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:28.080 08:41:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.080 08:41:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:28.080 [2024-10-05 08:41:04.491450] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:04:28.080 08:41:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.080 08:41:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:28.080 08:41:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.080 08:41:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:28.340 08:41:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.340 08:41:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:28.340 { 00:04:28.340 "subsystems": [ 00:04:28.340 { 00:04:28.340 "subsystem": "fsdev", 00:04:28.340 "config": [ 00:04:28.340 { 00:04:28.340 "method": "fsdev_set_opts", 00:04:28.340 "params": { 00:04:28.340 "fsdev_io_pool_size": 65535, 00:04:28.340 "fsdev_io_cache_size": 256 00:04:28.340 } 00:04:28.340 } 00:04:28.340 ] 00:04:28.340 }, 00:04:28.340 { 00:04:28.340 "subsystem": "keyring", 00:04:28.340 "config": [] 00:04:28.340 }, 00:04:28.340 { 00:04:28.340 "subsystem": "iobuf", 00:04:28.340 "config": [ 00:04:28.340 { 00:04:28.340 "method": "iobuf_set_options", 00:04:28.340 "params": { 00:04:28.340 "small_pool_count": 8192, 00:04:28.340 "large_pool_count": 1024, 00:04:28.340 "small_bufsize": 8192, 00:04:28.340 "large_bufsize": 135168 00:04:28.340 } 00:04:28.340 } 00:04:28.340 ] 00:04:28.340 }, 00:04:28.340 { 00:04:28.340 "subsystem": "sock", 00:04:28.340 "config": [ 00:04:28.340 { 00:04:28.340 "method": "sock_set_default_impl", 00:04:28.340 "params": { 00:04:28.340 "impl_name": "posix" 00:04:28.340 } 00:04:28.340 }, 00:04:28.340 { 00:04:28.340 "method": "sock_impl_set_options", 00:04:28.340 "params": { 00:04:28.340 "impl_name": "ssl", 00:04:28.340 "recv_buf_size": 4096, 00:04:28.340 "send_buf_size": 4096, 00:04:28.340 "enable_recv_pipe": true, 00:04:28.340 "enable_quickack": false, 00:04:28.340 "enable_placement_id": 0, 00:04:28.341 
"enable_zerocopy_send_server": true, 00:04:28.341 "enable_zerocopy_send_client": false, 00:04:28.341 "zerocopy_threshold": 0, 00:04:28.341 "tls_version": 0, 00:04:28.341 "enable_ktls": false 00:04:28.341 } 00:04:28.341 }, 00:04:28.341 { 00:04:28.341 "method": "sock_impl_set_options", 00:04:28.341 "params": { 00:04:28.341 "impl_name": "posix", 00:04:28.341 "recv_buf_size": 2097152, 00:04:28.341 "send_buf_size": 2097152, 00:04:28.341 "enable_recv_pipe": true, 00:04:28.341 "enable_quickack": false, 00:04:28.341 "enable_placement_id": 0, 00:04:28.341 "enable_zerocopy_send_server": true, 00:04:28.341 "enable_zerocopy_send_client": false, 00:04:28.341 "zerocopy_threshold": 0, 00:04:28.341 "tls_version": 0, 00:04:28.341 "enable_ktls": false 00:04:28.341 } 00:04:28.341 } 00:04:28.341 ] 00:04:28.341 }, 00:04:28.341 { 00:04:28.341 "subsystem": "vmd", 00:04:28.341 "config": [] 00:04:28.341 }, 00:04:28.341 { 00:04:28.341 "subsystem": "accel", 00:04:28.341 "config": [ 00:04:28.341 { 00:04:28.341 "method": "accel_set_options", 00:04:28.341 "params": { 00:04:28.341 "small_cache_size": 128, 00:04:28.341 "large_cache_size": 16, 00:04:28.341 "task_count": 2048, 00:04:28.341 "sequence_count": 2048, 00:04:28.341 "buf_count": 2048 00:04:28.341 } 00:04:28.341 } 00:04:28.341 ] 00:04:28.341 }, 00:04:28.341 { 00:04:28.341 "subsystem": "bdev", 00:04:28.341 "config": [ 00:04:28.341 { 00:04:28.341 "method": "bdev_set_options", 00:04:28.341 "params": { 00:04:28.341 "bdev_io_pool_size": 65535, 00:04:28.341 "bdev_io_cache_size": 256, 00:04:28.341 "bdev_auto_examine": true, 00:04:28.341 "iobuf_small_cache_size": 128, 00:04:28.341 "iobuf_large_cache_size": 16 00:04:28.341 } 00:04:28.341 }, 00:04:28.341 { 00:04:28.341 "method": "bdev_raid_set_options", 00:04:28.341 "params": { 00:04:28.341 "process_window_size_kb": 1024, 00:04:28.341 "process_max_bandwidth_mb_sec": 0 00:04:28.341 } 00:04:28.341 }, 00:04:28.341 { 00:04:28.341 "method": "bdev_iscsi_set_options", 00:04:28.341 "params": { 00:04:28.341 
"timeout_sec": 30 00:04:28.341 } 00:04:28.341 }, 00:04:28.341 { 00:04:28.341 "method": "bdev_nvme_set_options", 00:04:28.341 "params": { 00:04:28.341 "action_on_timeout": "none", 00:04:28.341 "timeout_us": 0, 00:04:28.341 "timeout_admin_us": 0, 00:04:28.341 "keep_alive_timeout_ms": 10000, 00:04:28.341 "arbitration_burst": 0, 00:04:28.341 "low_priority_weight": 0, 00:04:28.341 "medium_priority_weight": 0, 00:04:28.341 "high_priority_weight": 0, 00:04:28.341 "nvme_adminq_poll_period_us": 10000, 00:04:28.341 "nvme_ioq_poll_period_us": 0, 00:04:28.341 "io_queue_requests": 0, 00:04:28.341 "delay_cmd_submit": true, 00:04:28.341 "transport_retry_count": 4, 00:04:28.341 "bdev_retry_count": 3, 00:04:28.341 "transport_ack_timeout": 0, 00:04:28.341 "ctrlr_loss_timeout_sec": 0, 00:04:28.341 "reconnect_delay_sec": 0, 00:04:28.341 "fast_io_fail_timeout_sec": 0, 00:04:28.341 "disable_auto_failback": false, 00:04:28.341 "generate_uuids": false, 00:04:28.341 "transport_tos": 0, 00:04:28.341 "nvme_error_stat": false, 00:04:28.341 "rdma_srq_size": 0, 00:04:28.341 "io_path_stat": false, 00:04:28.341 "allow_accel_sequence": false, 00:04:28.341 "rdma_max_cq_size": 0, 00:04:28.341 "rdma_cm_event_timeout_ms": 0, 00:04:28.341 "dhchap_digests": [ 00:04:28.341 "sha256", 00:04:28.341 "sha384", 00:04:28.341 "sha512" 00:04:28.341 ], 00:04:28.341 "dhchap_dhgroups": [ 00:04:28.341 "null", 00:04:28.341 "ffdhe2048", 00:04:28.341 "ffdhe3072", 00:04:28.341 "ffdhe4096", 00:04:28.341 "ffdhe6144", 00:04:28.341 "ffdhe8192" 00:04:28.341 ] 00:04:28.341 } 00:04:28.341 }, 00:04:28.341 { 00:04:28.341 "method": "bdev_nvme_set_hotplug", 00:04:28.341 "params": { 00:04:28.341 "period_us": 100000, 00:04:28.341 "enable": false 00:04:28.341 } 00:04:28.341 }, 00:04:28.341 { 00:04:28.341 "method": "bdev_wait_for_examine" 00:04:28.341 } 00:04:28.341 ] 00:04:28.341 }, 00:04:28.341 { 00:04:28.341 "subsystem": "scsi", 00:04:28.341 "config": null 00:04:28.341 }, 00:04:28.341 { 00:04:28.341 "subsystem": "scheduler", 
00:04:28.341 "config": [ 00:04:28.341 { 00:04:28.341 "method": "framework_set_scheduler", 00:04:28.341 "params": { 00:04:28.341 "name": "static" 00:04:28.341 } 00:04:28.341 } 00:04:28.341 ] 00:04:28.341 }, 00:04:28.341 { 00:04:28.341 "subsystem": "vhost_scsi", 00:04:28.341 "config": [] 00:04:28.341 }, 00:04:28.341 { 00:04:28.341 "subsystem": "vhost_blk", 00:04:28.341 "config": [] 00:04:28.341 }, 00:04:28.341 { 00:04:28.341 "subsystem": "ublk", 00:04:28.341 "config": [] 00:04:28.341 }, 00:04:28.341 { 00:04:28.341 "subsystem": "nbd", 00:04:28.341 "config": [] 00:04:28.341 }, 00:04:28.341 { 00:04:28.341 "subsystem": "nvmf", 00:04:28.341 "config": [ 00:04:28.341 { 00:04:28.341 "method": "nvmf_set_config", 00:04:28.341 "params": { 00:04:28.341 "discovery_filter": "match_any", 00:04:28.341 "admin_cmd_passthru": { 00:04:28.341 "identify_ctrlr": false 00:04:28.341 }, 00:04:28.341 "dhchap_digests": [ 00:04:28.341 "sha256", 00:04:28.341 "sha384", 00:04:28.341 "sha512" 00:04:28.341 ], 00:04:28.341 "dhchap_dhgroups": [ 00:04:28.341 "null", 00:04:28.341 "ffdhe2048", 00:04:28.341 "ffdhe3072", 00:04:28.341 "ffdhe4096", 00:04:28.341 "ffdhe6144", 00:04:28.341 "ffdhe8192" 00:04:28.341 ] 00:04:28.341 } 00:04:28.341 }, 00:04:28.341 { 00:04:28.341 "method": "nvmf_set_max_subsystems", 00:04:28.341 "params": { 00:04:28.341 "max_subsystems": 1024 00:04:28.341 } 00:04:28.341 }, 00:04:28.341 { 00:04:28.341 "method": "nvmf_set_crdt", 00:04:28.341 "params": { 00:04:28.341 "crdt1": 0, 00:04:28.341 "crdt2": 0, 00:04:28.341 "crdt3": 0 00:04:28.341 } 00:04:28.341 }, 00:04:28.341 { 00:04:28.341 "method": "nvmf_create_transport", 00:04:28.341 "params": { 00:04:28.341 "trtype": "TCP", 00:04:28.341 "max_queue_depth": 128, 00:04:28.341 "max_io_qpairs_per_ctrlr": 127, 00:04:28.341 "in_capsule_data_size": 4096, 00:04:28.341 "max_io_size": 131072, 00:04:28.341 "io_unit_size": 131072, 00:04:28.341 "max_aq_depth": 128, 00:04:28.341 "num_shared_buffers": 511, 00:04:28.341 "buf_cache_size": 4294967295, 
00:04:28.341 "dif_insert_or_strip": false, 00:04:28.341 "zcopy": false, 00:04:28.341 "c2h_success": true, 00:04:28.341 "sock_priority": 0, 00:04:28.341 "abort_timeout_sec": 1, 00:04:28.341 "ack_timeout": 0, 00:04:28.341 "data_wr_pool_size": 0 00:04:28.341 } 00:04:28.341 } 00:04:28.341 ] 00:04:28.341 }, 00:04:28.341 { 00:04:28.341 "subsystem": "iscsi", 00:04:28.341 "config": [ 00:04:28.341 { 00:04:28.341 "method": "iscsi_set_options", 00:04:28.341 "params": { 00:04:28.341 "node_base": "iqn.2016-06.io.spdk", 00:04:28.341 "max_sessions": 128, 00:04:28.341 "max_connections_per_session": 2, 00:04:28.341 "max_queue_depth": 64, 00:04:28.341 "default_time2wait": 2, 00:04:28.341 "default_time2retain": 20, 00:04:28.341 "first_burst_length": 8192, 00:04:28.341 "immediate_data": true, 00:04:28.341 "allow_duplicated_isid": false, 00:04:28.341 "error_recovery_level": 0, 00:04:28.341 "nop_timeout": 60, 00:04:28.341 "nop_in_interval": 30, 00:04:28.341 "disable_chap": false, 00:04:28.341 "require_chap": false, 00:04:28.341 "mutual_chap": false, 00:04:28.341 "chap_group": 0, 00:04:28.341 "max_large_datain_per_connection": 64, 00:04:28.341 "max_r2t_per_connection": 4, 00:04:28.341 "pdu_pool_size": 36864, 00:04:28.341 "immediate_data_pool_size": 16384, 00:04:28.341 "data_out_pool_size": 2048 00:04:28.341 } 00:04:28.341 } 00:04:28.341 ] 00:04:28.341 } 00:04:28.341 ] 00:04:28.341 } 00:04:28.341 08:41:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:28.341 08:41:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57167 00:04:28.341 08:41:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57167 ']' 00:04:28.341 08:41:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57167 00:04:28.341 08:41:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:28.341 08:41:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:04:28.341 08:41:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57167 00:04:28.341 killing process with pid 57167 00:04:28.341 08:41:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:28.341 08:41:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:28.341 08:41:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57167' 00:04:28.341 08:41:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57167 00:04:28.341 08:41:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57167 00:04:30.884 08:41:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57223 00:04:30.884 08:41:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:30.884 08:41:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:36.165 08:41:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57223 00:04:36.165 08:41:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57223 ']' 00:04:36.165 08:41:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57223 00:04:36.165 08:41:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:36.165 08:41:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:36.165 08:41:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57223 00:04:36.165 08:41:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:36.165 08:41:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:36.165 killing process with pid 57223 
00:04:36.165 08:41:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57223' 00:04:36.165 08:41:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57223 00:04:36.165 08:41:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57223 00:04:38.703 08:41:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:38.703 08:41:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:38.703 00:04:38.703 real 0m12.011s 00:04:38.703 user 0m11.124s 00:04:38.703 sys 0m1.156s 00:04:38.703 08:41:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.703 08:41:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:38.703 ************************************ 00:04:38.703 END TEST skip_rpc_with_json 00:04:38.703 ************************************ 00:04:38.703 08:41:15 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:38.703 08:41:15 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.703 08:41:15 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.703 08:41:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.703 ************************************ 00:04:38.703 START TEST skip_rpc_with_delay 00:04:38.703 ************************************ 00:04:38.703 08:41:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:38.703 08:41:15 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:38.703 08:41:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:38.703 08:41:15 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:38.703 08:41:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.703 08:41:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:38.703 08:41:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.703 08:41:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:38.703 08:41:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.703 08:41:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:38.703 08:41:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.703 08:41:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:38.703 08:41:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:38.964 [2024-10-05 08:41:15.206879] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:38.964 [2024-10-05 08:41:15.207020] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:38.964 08:41:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:38.964 08:41:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:38.964 08:41:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:38.964 08:41:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:38.964 00:04:38.964 real 0m0.183s 00:04:38.964 user 0m0.090s 00:04:38.964 sys 0m0.092s 00:04:38.964 08:41:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.964 08:41:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:38.964 ************************************ 00:04:38.964 END TEST skip_rpc_with_delay 00:04:38.964 ************************************ 00:04:38.964 08:41:15 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:38.964 08:41:15 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:38.964 08:41:15 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:38.964 08:41:15 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.964 08:41:15 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.964 08:41:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.964 ************************************ 00:04:38.964 START TEST exit_on_failed_rpc_init 00:04:38.964 ************************************ 00:04:38.964 08:41:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:38.964 08:41:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57362 00:04:38.964 08:41:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 
00:04:38.964 08:41:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57362 00:04:38.964 08:41:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 57362 ']' 00:04:38.964 08:41:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.964 08:41:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:38.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.964 08:41:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.964 08:41:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:38.964 08:41:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:39.225 [2024-10-05 08:41:15.448583] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:04:39.225 [2024-10-05 08:41:15.448696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57362 ] 00:04:39.225 [2024-10-05 08:41:15.612337] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.485 [2024-10-05 08:41:15.852483] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.425 08:41:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:40.425 08:41:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:40.425 08:41:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:40.425 08:41:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:40.425 08:41:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:40.425 08:41:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:40.425 08:41:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.425 08:41:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:40.425 08:41:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.425 08:41:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:40.425 08:41:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.425 08:41:16 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:40.425 08:41:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.425 08:41:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:40.425 08:41:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:40.685 [2024-10-05 08:41:16.946285] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:04:40.685 [2024-10-05 08:41:16.946398] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57380 ] 00:04:40.685 [2024-10-05 08:41:17.108679] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.944 [2024-10-05 08:41:17.314838] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.944 [2024-10-05 08:41:17.314919] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:40.944 [2024-10-05 08:41:17.314931] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:40.944 [2024-10-05 08:41:17.314941] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:41.514 08:41:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:41.514 08:41:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:41.514 08:41:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:41.514 08:41:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:41.514 08:41:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:41.514 08:41:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:41.514 08:41:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:41.514 08:41:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57362 00:04:41.514 08:41:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 57362 ']' 00:04:41.514 08:41:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 57362 00:04:41.514 08:41:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:41.514 08:41:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:41.514 08:41:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57362 00:04:41.514 08:41:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:41.514 08:41:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:41.514 killing process with pid 57362 00:04:41.514 08:41:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 57362' 00:04:41.514 08:41:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 57362 00:04:41.514 08:41:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 57362 00:04:44.060 00:04:44.060 real 0m5.072s 00:04:44.060 user 0m5.401s 00:04:44.060 sys 0m0.771s 00:04:44.060 08:41:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:44.060 08:41:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:44.060 ************************************ 00:04:44.060 END TEST exit_on_failed_rpc_init 00:04:44.060 ************************************ 00:04:44.060 08:41:20 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:44.060 00:04:44.060 real 0m25.453s 00:04:44.060 user 0m23.862s 00:04:44.060 sys 0m2.893s 00:04:44.060 08:41:20 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:44.060 08:41:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.060 ************************************ 00:04:44.060 END TEST skip_rpc 00:04:44.060 ************************************ 00:04:44.060 08:41:20 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:44.060 08:41:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:44.320 08:41:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:44.320 08:41:20 -- common/autotest_common.sh@10 -- # set +x 00:04:44.320 ************************************ 00:04:44.320 START TEST rpc_client 00:04:44.320 ************************************ 00:04:44.320 08:41:20 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:44.320 * Looking for test storage... 
00:04:44.320 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:44.320 08:41:20 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:44.320 08:41:20 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:04:44.320 08:41:20 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:44.320 08:41:20 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:44.320 08:41:20 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.320 08:41:20 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.320 08:41:20 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.320 08:41:20 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.320 08:41:20 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.320 08:41:20 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.320 08:41:20 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.320 08:41:20 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.320 08:41:20 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.320 08:41:20 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.320 08:41:20 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.320 08:41:20 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:44.320 08:41:20 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:44.320 08:41:20 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.320 08:41:20 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:44.320 08:41:20 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:44.320 08:41:20 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:44.320 08:41:20 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.320 08:41:20 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:44.320 08:41:20 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.320 08:41:20 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:44.320 08:41:20 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:44.320 08:41:20 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.320 08:41:20 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:44.320 08:41:20 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.320 08:41:20 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.320 08:41:20 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.320 08:41:20 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:44.320 08:41:20 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.320 08:41:20 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:44.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.320 --rc genhtml_branch_coverage=1 00:04:44.320 --rc genhtml_function_coverage=1 00:04:44.320 --rc genhtml_legend=1 00:04:44.320 --rc geninfo_all_blocks=1 00:04:44.320 --rc geninfo_unexecuted_blocks=1 00:04:44.320 00:04:44.320 ' 00:04:44.320 08:41:20 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:44.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.320 --rc genhtml_branch_coverage=1 00:04:44.320 --rc genhtml_function_coverage=1 00:04:44.320 --rc genhtml_legend=1 00:04:44.320 --rc geninfo_all_blocks=1 00:04:44.320 --rc geninfo_unexecuted_blocks=1 00:04:44.320 00:04:44.320 ' 00:04:44.320 08:41:20 rpc_client -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:44.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.320 --rc genhtml_branch_coverage=1 00:04:44.320 --rc genhtml_function_coverage=1 00:04:44.320 --rc genhtml_legend=1 00:04:44.320 --rc geninfo_all_blocks=1 00:04:44.320 --rc geninfo_unexecuted_blocks=1 00:04:44.320 00:04:44.320 ' 00:04:44.320 08:41:20 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:44.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.320 --rc genhtml_branch_coverage=1 00:04:44.320 --rc genhtml_function_coverage=1 00:04:44.320 --rc genhtml_legend=1 00:04:44.320 --rc geninfo_all_blocks=1 00:04:44.320 --rc geninfo_unexecuted_blocks=1 00:04:44.320 00:04:44.320 ' 00:04:44.320 08:41:20 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:44.580 OK 00:04:44.580 08:41:20 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:44.580 00:04:44.580 real 0m0.285s 00:04:44.580 user 0m0.154s 00:04:44.580 sys 0m0.148s 00:04:44.580 08:41:20 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:44.580 08:41:20 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:44.580 ************************************ 00:04:44.580 END TEST rpc_client 00:04:44.580 ************************************ 00:04:44.580 08:41:20 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:44.580 08:41:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:44.580 08:41:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:44.580 08:41:20 -- common/autotest_common.sh@10 -- # set +x 00:04:44.580 ************************************ 00:04:44.580 START TEST json_config 00:04:44.580 ************************************ 00:04:44.580 08:41:20 json_config -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:44.580 08:41:20 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:44.580 08:41:20 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:04:44.580 08:41:20 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:44.580 08:41:21 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:44.580 08:41:21 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.580 08:41:21 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.841 08:41:21 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.841 08:41:21 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.841 08:41:21 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.841 08:41:21 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.841 08:41:21 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.841 08:41:21 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.841 08:41:21 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.841 08:41:21 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.841 08:41:21 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.841 08:41:21 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:44.841 08:41:21 json_config -- scripts/common.sh@345 -- # : 1 00:04:44.841 08:41:21 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.841 08:41:21 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:44.841 08:41:21 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:44.841 08:41:21 json_config -- scripts/common.sh@353 -- # local d=1 00:04:44.841 08:41:21 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.841 08:41:21 json_config -- scripts/common.sh@355 -- # echo 1 00:04:44.841 08:41:21 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.841 08:41:21 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:44.841 08:41:21 json_config -- scripts/common.sh@353 -- # local d=2 00:04:44.841 08:41:21 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.841 08:41:21 json_config -- scripts/common.sh@355 -- # echo 2 00:04:44.841 08:41:21 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.841 08:41:21 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.841 08:41:21 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.841 08:41:21 json_config -- scripts/common.sh@368 -- # return 0 00:04:44.841 08:41:21 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.841 08:41:21 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:44.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.841 --rc genhtml_branch_coverage=1 00:04:44.841 --rc genhtml_function_coverage=1 00:04:44.841 --rc genhtml_legend=1 00:04:44.841 --rc geninfo_all_blocks=1 00:04:44.841 --rc geninfo_unexecuted_blocks=1 00:04:44.841 00:04:44.841 ' 00:04:44.841 08:41:21 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:44.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.841 --rc genhtml_branch_coverage=1 00:04:44.841 --rc genhtml_function_coverage=1 00:04:44.841 --rc genhtml_legend=1 00:04:44.841 --rc geninfo_all_blocks=1 00:04:44.841 --rc geninfo_unexecuted_blocks=1 00:04:44.841 00:04:44.841 ' 00:04:44.841 08:41:21 json_config -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:44.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.841 --rc genhtml_branch_coverage=1 00:04:44.841 --rc genhtml_function_coverage=1 00:04:44.841 --rc genhtml_legend=1 00:04:44.841 --rc geninfo_all_blocks=1 00:04:44.841 --rc geninfo_unexecuted_blocks=1 00:04:44.841 00:04:44.841 ' 00:04:44.841 08:41:21 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:44.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.841 --rc genhtml_branch_coverage=1 00:04:44.841 --rc genhtml_function_coverage=1 00:04:44.841 --rc genhtml_legend=1 00:04:44.841 --rc geninfo_all_blocks=1 00:04:44.841 --rc geninfo_unexecuted_blocks=1 00:04:44.841 00:04:44.841 ' 00:04:44.841 08:41:21 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:44.841 08:41:21 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:44.841 08:41:21 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:44.841 08:41:21 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:44.841 08:41:21 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:44.841 08:41:21 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:44.841 08:41:21 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:44.841 08:41:21 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:44.841 08:41:21 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:44.841 08:41:21 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:44.841 08:41:21 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:44.841 08:41:21 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:44.841 08:41:21 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:45fb7a37-c69d-4288-ba7a-a90b847fc105 00:04:44.841 08:41:21 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=45fb7a37-c69d-4288-ba7a-a90b847fc105 00:04:44.841 08:41:21 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:44.841 08:41:21 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:44.841 08:41:21 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:44.841 08:41:21 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:44.841 08:41:21 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:44.841 08:41:21 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:44.841 08:41:21 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:44.841 08:41:21 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:44.841 08:41:21 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:44.841 08:41:21 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.841 08:41:21 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.842 08:41:21 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.842 08:41:21 json_config -- paths/export.sh@5 -- # export PATH 00:04:44.842 08:41:21 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.842 08:41:21 json_config -- nvmf/common.sh@51 -- # : 0 00:04:44.842 08:41:21 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:44.842 08:41:21 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:44.842 08:41:21 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:44.842 08:41:21 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:44.842 08:41:21 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:44.842 08:41:21 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:44.842 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:44.842 08:41:21 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:44.842 08:41:21 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:44.842 08:41:21 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:44.842 08:41:21 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:04:44.842 08:41:21 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:44.842 08:41:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:44.842 08:41:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:44.842 08:41:21 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:44.842 WARNING: No tests are enabled so not running JSON configuration tests 00:04:44.842 08:41:21 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:44.842 08:41:21 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:44.842 00:04:44.842 real 0m0.229s 00:04:44.842 user 0m0.136s 00:04:44.842 sys 0m0.102s 00:04:44.842 08:41:21 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:44.842 08:41:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.842 ************************************ 00:04:44.842 END TEST json_config 00:04:44.842 ************************************ 00:04:44.842 08:41:21 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:44.842 08:41:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:44.842 08:41:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:44.842 08:41:21 -- common/autotest_common.sh@10 -- # set +x 00:04:44.842 ************************************ 00:04:44.842 START TEST json_config_extra_key 00:04:44.842 ************************************ 00:04:44.842 08:41:21 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:44.842 08:41:21 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:44.842 08:41:21 json_config_extra_key -- 
common/autotest_common.sh@1681 -- # lcov --version 00:04:44.842 08:41:21 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:45.103 08:41:21 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:45.103 08:41:21 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.103 08:41:21 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.103 08:41:21 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.103 08:41:21 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.103 08:41:21 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.103 08:41:21 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.103 08:41:21 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:45.103 08:41:21 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.103 08:41:21 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.103 08:41:21 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.103 08:41:21 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.103 08:41:21 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:45.103 08:41:21 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:45.103 08:41:21 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.103 08:41:21 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:45.103 08:41:21 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:45.103 08:41:21 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:45.103 08:41:21 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.103 08:41:21 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:45.103 08:41:21 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.103 08:41:21 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:45.103 08:41:21 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:45.103 08:41:21 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.103 08:41:21 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:45.103 08:41:21 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.103 08:41:21 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.103 08:41:21 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.103 08:41:21 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:45.103 08:41:21 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.103 08:41:21 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:45.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.103 --rc genhtml_branch_coverage=1 00:04:45.103 --rc genhtml_function_coverage=1 00:04:45.103 --rc genhtml_legend=1 00:04:45.103 --rc geninfo_all_blocks=1 00:04:45.103 --rc geninfo_unexecuted_blocks=1 00:04:45.103 00:04:45.103 ' 00:04:45.103 08:41:21 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:45.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.103 --rc genhtml_branch_coverage=1 00:04:45.103 --rc genhtml_function_coverage=1 00:04:45.103 --rc 
genhtml_legend=1 00:04:45.103 --rc geninfo_all_blocks=1 00:04:45.103 --rc geninfo_unexecuted_blocks=1 00:04:45.103 00:04:45.103 ' 00:04:45.103 08:41:21 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:45.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.103 --rc genhtml_branch_coverage=1 00:04:45.103 --rc genhtml_function_coverage=1 00:04:45.103 --rc genhtml_legend=1 00:04:45.103 --rc geninfo_all_blocks=1 00:04:45.103 --rc geninfo_unexecuted_blocks=1 00:04:45.103 00:04:45.103 ' 00:04:45.103 08:41:21 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:45.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.103 --rc genhtml_branch_coverage=1 00:04:45.103 --rc genhtml_function_coverage=1 00:04:45.103 --rc genhtml_legend=1 00:04:45.103 --rc geninfo_all_blocks=1 00:04:45.103 --rc geninfo_unexecuted_blocks=1 00:04:45.103 00:04:45.103 ' 00:04:45.103 08:41:21 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:45.103 08:41:21 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:45.103 08:41:21 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:45.103 08:41:21 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:45.103 08:41:21 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:45.103 08:41:21 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:45.103 08:41:21 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:45.103 08:41:21 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:45.103 08:41:21 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:45.103 08:41:21 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:45.103 08:41:21 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:45.103 08:41:21 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:45.103 08:41:21 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:45fb7a37-c69d-4288-ba7a-a90b847fc105 00:04:45.103 08:41:21 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=45fb7a37-c69d-4288-ba7a-a90b847fc105 00:04:45.103 08:41:21 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:45.103 08:41:21 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:45.103 08:41:21 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:45.103 08:41:21 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:45.103 08:41:21 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:45.103 08:41:21 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:45.103 08:41:21 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:45.103 08:41:21 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:45.103 08:41:21 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:45.103 08:41:21 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.103 08:41:21 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.103 08:41:21 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.103 08:41:21 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:45.103 08:41:21 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.103 08:41:21 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:45.103 08:41:21 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:45.103 08:41:21 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:45.103 08:41:21 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:45.103 08:41:21 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:45.103 08:41:21 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:04:45.103 08:41:21 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:45.103 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:45.104 08:41:21 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:45.104 08:41:21 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:45.104 08:41:21 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:45.104 08:41:21 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:45.104 08:41:21 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:45.104 08:41:21 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:45.104 08:41:21 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:45.104 08:41:21 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:45.104 08:41:21 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:45.104 08:41:21 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:45.104 08:41:21 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:45.104 08:41:21 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:45.104 08:41:21 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:45.104 08:41:21 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:45.104 INFO: launching applications... 
00:04:45.104 08:41:21 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:45.104 08:41:21 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:45.104 08:41:21 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:45.104 08:41:21 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:45.104 08:41:21 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:45.104 08:41:21 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:45.104 08:41:21 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:45.104 08:41:21 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:45.104 08:41:21 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57596 00:04:45.104 08:41:21 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:45.104 Waiting for target to run... 00:04:45.104 08:41:21 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57596 /var/tmp/spdk_tgt.sock 00:04:45.104 08:41:21 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 57596 ']' 00:04:45.104 08:41:21 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:45.104 08:41:21 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:45.104 08:41:21 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:45.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:45.104 08:41:21 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:45.104 08:41:21 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:45.104 08:41:21 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:45.104 [2024-10-05 08:41:21.498017] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:04:45.104 [2024-10-05 08:41:21.498130] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57596 ] 00:04:45.675 [2024-10-05 08:41:21.864555] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.675 [2024-10-05 08:41:22.075909] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.616 00:04:46.616 INFO: shutting down applications... 00:04:46.616 08:41:22 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:46.616 08:41:22 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:46.616 08:41:22 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:46.616 08:41:22 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:04:46.616 08:41:22 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:46.616 08:41:22 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:46.616 08:41:22 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:46.616 08:41:22 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57596 ]] 00:04:46.616 08:41:22 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57596 00:04:46.616 08:41:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:46.616 08:41:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:46.616 08:41:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57596 00:04:46.616 08:41:22 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:46.877 08:41:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:46.877 08:41:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:46.877 08:41:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57596 00:04:46.877 08:41:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:47.447 08:41:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:47.447 08:41:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:47.447 08:41:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57596 00:04:47.447 08:41:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:48.017 08:41:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:48.017 08:41:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:48.017 08:41:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57596 00:04:48.017 08:41:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:48.587 08:41:24 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:04:48.587 08:41:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:48.587 08:41:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57596 00:04:48.587 08:41:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:49.158 08:41:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:49.158 08:41:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:49.158 08:41:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57596 00:04:49.158 08:41:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:49.417 08:41:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:49.417 08:41:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:49.417 08:41:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57596 00:04:49.417 08:41:25 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:49.417 08:41:25 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:49.417 SPDK target shutdown done 00:04:49.417 08:41:25 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:49.417 08:41:25 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:49.417 08:41:25 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:49.417 Success 00:04:49.417 00:04:49.417 real 0m4.663s 00:04:49.417 user 0m4.308s 00:04:49.417 sys 0m0.571s 00:04:49.417 08:41:25 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.417 08:41:25 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:49.417 ************************************ 00:04:49.417 END TEST json_config_extra_key 00:04:49.417 ************************************ 00:04:49.678 08:41:25 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:49.678 08:41:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.678 08:41:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.678 08:41:25 -- common/autotest_common.sh@10 -- # set +x 00:04:49.678 ************************************ 00:04:49.678 START TEST alias_rpc 00:04:49.678 ************************************ 00:04:49.678 08:41:25 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:49.678 * Looking for test storage... 00:04:49.678 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:49.678 08:41:26 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:49.678 08:41:26 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:49.678 08:41:26 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:49.678 08:41:26 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:49.678 08:41:26 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.678 08:41:26 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.678 08:41:26 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.678 08:41:26 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.678 08:41:26 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.678 08:41:26 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.678 08:41:26 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.678 08:41:26 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.678 08:41:26 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.678 08:41:26 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.678 08:41:26 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.678 08:41:26 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:49.678 08:41:26 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:04:49.678 08:41:26 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.678 08:41:26 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:49.678 08:41:26 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:49.678 08:41:26 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:49.678 08:41:26 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.678 08:41:26 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:49.678 08:41:26 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.678 08:41:26 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:49.678 08:41:26 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:49.678 08:41:26 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.678 08:41:26 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:49.678 08:41:26 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.678 08:41:26 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.678 08:41:26 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.678 08:41:26 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:49.678 08:41:26 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.678 08:41:26 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:49.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.678 --rc genhtml_branch_coverage=1 00:04:49.678 --rc genhtml_function_coverage=1 00:04:49.678 --rc genhtml_legend=1 00:04:49.678 --rc geninfo_all_blocks=1 00:04:49.678 --rc geninfo_unexecuted_blocks=1 00:04:49.678 00:04:49.678 ' 00:04:49.678 08:41:26 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:49.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.678 --rc genhtml_branch_coverage=1 00:04:49.678 --rc genhtml_function_coverage=1 00:04:49.678 --rc 
genhtml_legend=1 00:04:49.678 --rc geninfo_all_blocks=1 00:04:49.678 --rc geninfo_unexecuted_blocks=1 00:04:49.678 00:04:49.678 ' 00:04:49.678 08:41:26 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:49.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.678 --rc genhtml_branch_coverage=1 00:04:49.678 --rc genhtml_function_coverage=1 00:04:49.678 --rc genhtml_legend=1 00:04:49.678 --rc geninfo_all_blocks=1 00:04:49.678 --rc geninfo_unexecuted_blocks=1 00:04:49.678 00:04:49.678 ' 00:04:49.678 08:41:26 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:49.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.678 --rc genhtml_branch_coverage=1 00:04:49.678 --rc genhtml_function_coverage=1 00:04:49.678 --rc genhtml_legend=1 00:04:49.678 --rc geninfo_all_blocks=1 00:04:49.678 --rc geninfo_unexecuted_blocks=1 00:04:49.678 00:04:49.678 ' 00:04:49.678 08:41:26 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:49.678 08:41:26 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57707 00:04:49.678 08:41:26 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:49.678 08:41:26 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57707 00:04:49.678 08:41:26 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 57707 ']' 00:04:49.678 08:41:26 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.679 08:41:26 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:49.679 08:41:26 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:49.679 08:41:26 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:49.679 08:41:26 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.939 [2024-10-05 08:41:26.229598] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:04:49.939 [2024-10-05 08:41:26.229819] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57707 ] 00:04:49.939 [2024-10-05 08:41:26.392739] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.199 [2024-10-05 08:41:26.624119] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.145 08:41:27 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:51.145 08:41:27 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:51.145 08:41:27 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:51.420 08:41:27 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57707 00:04:51.420 08:41:27 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 57707 ']' 00:04:51.420 08:41:27 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 57707 00:04:51.420 08:41:27 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:51.420 08:41:27 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:51.420 08:41:27 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57707 00:04:51.420 killing process with pid 57707 00:04:51.420 08:41:27 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:51.420 08:41:27 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:51.420 08:41:27 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57707' 00:04:51.420 08:41:27 alias_rpc -- 
common/autotest_common.sh@969 -- # kill 57707 00:04:51.420 08:41:27 alias_rpc -- common/autotest_common.sh@974 -- # wait 57707 00:04:54.714 ************************************ 00:04:54.714 END TEST alias_rpc 00:04:54.714 ************************************ 00:04:54.714 00:04:54.714 real 0m4.608s 00:04:54.714 user 0m4.395s 00:04:54.714 sys 0m0.742s 00:04:54.714 08:41:30 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:54.714 08:41:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.714 08:41:30 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:54.714 08:41:30 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:54.714 08:41:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:54.714 08:41:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:54.714 08:41:30 -- common/autotest_common.sh@10 -- # set +x 00:04:54.714 ************************************ 00:04:54.714 START TEST spdkcli_tcp 00:04:54.714 ************************************ 00:04:54.714 08:41:30 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:54.714 * Looking for test storage... 
00:04:54.714 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:54.714 08:41:30 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:54.714 08:41:30 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:04:54.714 08:41:30 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:54.714 08:41:30 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:54.714 08:41:30 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.714 08:41:30 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.714 08:41:30 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.714 08:41:30 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.714 08:41:30 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.714 08:41:30 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.714 08:41:30 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.714 08:41:30 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.714 08:41:30 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.714 08:41:30 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.714 08:41:30 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.714 08:41:30 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:54.714 08:41:30 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:54.714 08:41:30 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.714 08:41:30 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:54.714 08:41:30 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:54.714 08:41:30 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:54.714 08:41:30 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.714 08:41:30 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:54.714 08:41:30 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.714 08:41:30 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:54.714 08:41:30 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:54.714 08:41:30 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.714 08:41:30 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:54.714 08:41:30 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.714 08:41:30 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.714 08:41:30 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.714 08:41:30 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:54.714 08:41:30 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.714 08:41:30 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:54.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.714 --rc genhtml_branch_coverage=1 00:04:54.714 --rc genhtml_function_coverage=1 00:04:54.714 --rc genhtml_legend=1 00:04:54.714 --rc geninfo_all_blocks=1 00:04:54.714 --rc geninfo_unexecuted_blocks=1 00:04:54.714 00:04:54.714 ' 00:04:54.714 08:41:30 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:54.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.714 --rc genhtml_branch_coverage=1 00:04:54.714 --rc genhtml_function_coverage=1 00:04:54.714 --rc genhtml_legend=1 00:04:54.714 --rc geninfo_all_blocks=1 00:04:54.714 --rc geninfo_unexecuted_blocks=1 00:04:54.714 00:04:54.714 ' 00:04:54.714 08:41:30 spdkcli_tcp -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:54.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.714 --rc genhtml_branch_coverage=1 00:04:54.714 --rc genhtml_function_coverage=1 00:04:54.714 --rc genhtml_legend=1 00:04:54.714 --rc geninfo_all_blocks=1 00:04:54.714 --rc geninfo_unexecuted_blocks=1 00:04:54.714 00:04:54.714 ' 00:04:54.714 08:41:30 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:54.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.714 --rc genhtml_branch_coverage=1 00:04:54.714 --rc genhtml_function_coverage=1 00:04:54.714 --rc genhtml_legend=1 00:04:54.714 --rc geninfo_all_blocks=1 00:04:54.714 --rc geninfo_unexecuted_blocks=1 00:04:54.714 00:04:54.714 ' 00:04:54.714 08:41:30 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:54.714 08:41:30 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:54.714 08:41:30 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:54.714 08:41:30 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:54.714 08:41:30 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:54.715 08:41:30 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:54.715 08:41:30 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:54.715 08:41:30 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:54.715 08:41:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:54.715 08:41:30 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:54.715 08:41:30 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57820 00:04:54.715 08:41:30 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57820 00:04:54.715 08:41:30 spdkcli_tcp -- 
common/autotest_common.sh@831 -- # '[' -z 57820 ']' 00:04:54.715 08:41:30 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.715 08:41:30 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:54.715 08:41:30 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.715 08:41:30 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:54.715 08:41:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:54.715 [2024-10-05 08:41:30.910548] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:04:54.715 [2024-10-05 08:41:30.910769] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57820 ] 00:04:54.715 [2024-10-05 08:41:31.073765] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:54.975 [2024-10-05 08:41:31.322098] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.975 [2024-10-05 08:41:31.322148] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.913 08:41:32 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:55.913 08:41:32 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:04:55.913 08:41:32 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57842 00:04:55.914 08:41:32 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:55.914 08:41:32 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:56.174 [ 00:04:56.174 "bdev_malloc_delete", 
00:04:56.174 "bdev_malloc_create", 00:04:56.174 "bdev_null_resize", 00:04:56.174 "bdev_null_delete", 00:04:56.174 "bdev_null_create", 00:04:56.174 "bdev_nvme_cuse_unregister", 00:04:56.174 "bdev_nvme_cuse_register", 00:04:56.174 "bdev_opal_new_user", 00:04:56.174 "bdev_opal_set_lock_state", 00:04:56.174 "bdev_opal_delete", 00:04:56.174 "bdev_opal_get_info", 00:04:56.174 "bdev_opal_create", 00:04:56.174 "bdev_nvme_opal_revert", 00:04:56.174 "bdev_nvme_opal_init", 00:04:56.174 "bdev_nvme_send_cmd", 00:04:56.174 "bdev_nvme_set_keys", 00:04:56.174 "bdev_nvme_get_path_iostat", 00:04:56.174 "bdev_nvme_get_mdns_discovery_info", 00:04:56.174 "bdev_nvme_stop_mdns_discovery", 00:04:56.174 "bdev_nvme_start_mdns_discovery", 00:04:56.174 "bdev_nvme_set_multipath_policy", 00:04:56.174 "bdev_nvme_set_preferred_path", 00:04:56.174 "bdev_nvme_get_io_paths", 00:04:56.174 "bdev_nvme_remove_error_injection", 00:04:56.174 "bdev_nvme_add_error_injection", 00:04:56.174 "bdev_nvme_get_discovery_info", 00:04:56.174 "bdev_nvme_stop_discovery", 00:04:56.174 "bdev_nvme_start_discovery", 00:04:56.174 "bdev_nvme_get_controller_health_info", 00:04:56.174 "bdev_nvme_disable_controller", 00:04:56.174 "bdev_nvme_enable_controller", 00:04:56.174 "bdev_nvme_reset_controller", 00:04:56.174 "bdev_nvme_get_transport_statistics", 00:04:56.174 "bdev_nvme_apply_firmware", 00:04:56.174 "bdev_nvme_detach_controller", 00:04:56.174 "bdev_nvme_get_controllers", 00:04:56.174 "bdev_nvme_attach_controller", 00:04:56.174 "bdev_nvme_set_hotplug", 00:04:56.174 "bdev_nvme_set_options", 00:04:56.174 "bdev_passthru_delete", 00:04:56.174 "bdev_passthru_create", 00:04:56.174 "bdev_lvol_set_parent_bdev", 00:04:56.174 "bdev_lvol_set_parent", 00:04:56.174 "bdev_lvol_check_shallow_copy", 00:04:56.174 "bdev_lvol_start_shallow_copy", 00:04:56.174 "bdev_lvol_grow_lvstore", 00:04:56.174 "bdev_lvol_get_lvols", 00:04:56.174 "bdev_lvol_get_lvstores", 00:04:56.174 "bdev_lvol_delete", 00:04:56.174 "bdev_lvol_set_read_only", 
00:04:56.174 "bdev_lvol_resize", 00:04:56.174 "bdev_lvol_decouple_parent", 00:04:56.174 "bdev_lvol_inflate", 00:04:56.174 "bdev_lvol_rename", 00:04:56.174 "bdev_lvol_clone_bdev", 00:04:56.174 "bdev_lvol_clone", 00:04:56.174 "bdev_lvol_snapshot", 00:04:56.174 "bdev_lvol_create", 00:04:56.174 "bdev_lvol_delete_lvstore", 00:04:56.174 "bdev_lvol_rename_lvstore", 00:04:56.174 "bdev_lvol_create_lvstore", 00:04:56.174 "bdev_raid_set_options", 00:04:56.174 "bdev_raid_remove_base_bdev", 00:04:56.174 "bdev_raid_add_base_bdev", 00:04:56.174 "bdev_raid_delete", 00:04:56.174 "bdev_raid_create", 00:04:56.174 "bdev_raid_get_bdevs", 00:04:56.174 "bdev_error_inject_error", 00:04:56.174 "bdev_error_delete", 00:04:56.174 "bdev_error_create", 00:04:56.174 "bdev_split_delete", 00:04:56.174 "bdev_split_create", 00:04:56.174 "bdev_delay_delete", 00:04:56.174 "bdev_delay_create", 00:04:56.174 "bdev_delay_update_latency", 00:04:56.174 "bdev_zone_block_delete", 00:04:56.174 "bdev_zone_block_create", 00:04:56.174 "blobfs_create", 00:04:56.174 "blobfs_detect", 00:04:56.174 "blobfs_set_cache_size", 00:04:56.174 "bdev_aio_delete", 00:04:56.174 "bdev_aio_rescan", 00:04:56.174 "bdev_aio_create", 00:04:56.174 "bdev_ftl_set_property", 00:04:56.174 "bdev_ftl_get_properties", 00:04:56.174 "bdev_ftl_get_stats", 00:04:56.174 "bdev_ftl_unmap", 00:04:56.174 "bdev_ftl_unload", 00:04:56.174 "bdev_ftl_delete", 00:04:56.174 "bdev_ftl_load", 00:04:56.174 "bdev_ftl_create", 00:04:56.174 "bdev_virtio_attach_controller", 00:04:56.174 "bdev_virtio_scsi_get_devices", 00:04:56.174 "bdev_virtio_detach_controller", 00:04:56.174 "bdev_virtio_blk_set_hotplug", 00:04:56.174 "bdev_iscsi_delete", 00:04:56.174 "bdev_iscsi_create", 00:04:56.174 "bdev_iscsi_set_options", 00:04:56.174 "accel_error_inject_error", 00:04:56.174 "ioat_scan_accel_module", 00:04:56.174 "dsa_scan_accel_module", 00:04:56.174 "iaa_scan_accel_module", 00:04:56.174 "keyring_file_remove_key", 00:04:56.174 "keyring_file_add_key", 00:04:56.174 
"keyring_linux_set_options", 00:04:56.174 "fsdev_aio_delete", 00:04:56.174 "fsdev_aio_create", 00:04:56.174 "iscsi_get_histogram", 00:04:56.174 "iscsi_enable_histogram", 00:04:56.174 "iscsi_set_options", 00:04:56.174 "iscsi_get_auth_groups", 00:04:56.174 "iscsi_auth_group_remove_secret", 00:04:56.174 "iscsi_auth_group_add_secret", 00:04:56.174 "iscsi_delete_auth_group", 00:04:56.174 "iscsi_create_auth_group", 00:04:56.174 "iscsi_set_discovery_auth", 00:04:56.174 "iscsi_get_options", 00:04:56.174 "iscsi_target_node_request_logout", 00:04:56.174 "iscsi_target_node_set_redirect", 00:04:56.174 "iscsi_target_node_set_auth", 00:04:56.174 "iscsi_target_node_add_lun", 00:04:56.174 "iscsi_get_stats", 00:04:56.174 "iscsi_get_connections", 00:04:56.174 "iscsi_portal_group_set_auth", 00:04:56.174 "iscsi_start_portal_group", 00:04:56.174 "iscsi_delete_portal_group", 00:04:56.174 "iscsi_create_portal_group", 00:04:56.174 "iscsi_get_portal_groups", 00:04:56.174 "iscsi_delete_target_node", 00:04:56.174 "iscsi_target_node_remove_pg_ig_maps", 00:04:56.174 "iscsi_target_node_add_pg_ig_maps", 00:04:56.174 "iscsi_create_target_node", 00:04:56.174 "iscsi_get_target_nodes", 00:04:56.174 "iscsi_delete_initiator_group", 00:04:56.174 "iscsi_initiator_group_remove_initiators", 00:04:56.174 "iscsi_initiator_group_add_initiators", 00:04:56.174 "iscsi_create_initiator_group", 00:04:56.174 "iscsi_get_initiator_groups", 00:04:56.174 "nvmf_set_crdt", 00:04:56.174 "nvmf_set_config", 00:04:56.174 "nvmf_set_max_subsystems", 00:04:56.174 "nvmf_stop_mdns_prr", 00:04:56.174 "nvmf_publish_mdns_prr", 00:04:56.174 "nvmf_subsystem_get_listeners", 00:04:56.174 "nvmf_subsystem_get_qpairs", 00:04:56.174 "nvmf_subsystem_get_controllers", 00:04:56.174 "nvmf_get_stats", 00:04:56.174 "nvmf_get_transports", 00:04:56.174 "nvmf_create_transport", 00:04:56.174 "nvmf_get_targets", 00:04:56.174 "nvmf_delete_target", 00:04:56.174 "nvmf_create_target", 00:04:56.174 "nvmf_subsystem_allow_any_host", 00:04:56.174 
"nvmf_subsystem_set_keys", 00:04:56.174 "nvmf_subsystem_remove_host", 00:04:56.174 "nvmf_subsystem_add_host", 00:04:56.174 "nvmf_ns_remove_host", 00:04:56.174 "nvmf_ns_add_host", 00:04:56.174 "nvmf_subsystem_remove_ns", 00:04:56.174 "nvmf_subsystem_set_ns_ana_group", 00:04:56.174 "nvmf_subsystem_add_ns", 00:04:56.174 "nvmf_subsystem_listener_set_ana_state", 00:04:56.174 "nvmf_discovery_get_referrals", 00:04:56.174 "nvmf_discovery_remove_referral", 00:04:56.174 "nvmf_discovery_add_referral", 00:04:56.174 "nvmf_subsystem_remove_listener", 00:04:56.174 "nvmf_subsystem_add_listener", 00:04:56.174 "nvmf_delete_subsystem", 00:04:56.174 "nvmf_create_subsystem", 00:04:56.174 "nvmf_get_subsystems", 00:04:56.174 "env_dpdk_get_mem_stats", 00:04:56.174 "nbd_get_disks", 00:04:56.174 "nbd_stop_disk", 00:04:56.174 "nbd_start_disk", 00:04:56.174 "ublk_recover_disk", 00:04:56.175 "ublk_get_disks", 00:04:56.175 "ublk_stop_disk", 00:04:56.175 "ublk_start_disk", 00:04:56.175 "ublk_destroy_target", 00:04:56.175 "ublk_create_target", 00:04:56.175 "virtio_blk_create_transport", 00:04:56.175 "virtio_blk_get_transports", 00:04:56.175 "vhost_controller_set_coalescing", 00:04:56.175 "vhost_get_controllers", 00:04:56.175 "vhost_delete_controller", 00:04:56.175 "vhost_create_blk_controller", 00:04:56.175 "vhost_scsi_controller_remove_target", 00:04:56.175 "vhost_scsi_controller_add_target", 00:04:56.175 "vhost_start_scsi_controller", 00:04:56.175 "vhost_create_scsi_controller", 00:04:56.175 "thread_set_cpumask", 00:04:56.175 "scheduler_set_options", 00:04:56.175 "framework_get_governor", 00:04:56.175 "framework_get_scheduler", 00:04:56.175 "framework_set_scheduler", 00:04:56.175 "framework_get_reactors", 00:04:56.175 "thread_get_io_channels", 00:04:56.175 "thread_get_pollers", 00:04:56.175 "thread_get_stats", 00:04:56.175 "framework_monitor_context_switch", 00:04:56.175 "spdk_kill_instance", 00:04:56.175 "log_enable_timestamps", 00:04:56.175 "log_get_flags", 00:04:56.175 "log_clear_flag", 
00:04:56.175 "log_set_flag", 00:04:56.175 "log_get_level", 00:04:56.175 "log_set_level", 00:04:56.175 "log_get_print_level", 00:04:56.175 "log_set_print_level", 00:04:56.175 "framework_enable_cpumask_locks", 00:04:56.175 "framework_disable_cpumask_locks", 00:04:56.175 "framework_wait_init", 00:04:56.175 "framework_start_init", 00:04:56.175 "scsi_get_devices", 00:04:56.175 "bdev_get_histogram", 00:04:56.175 "bdev_enable_histogram", 00:04:56.175 "bdev_set_qos_limit", 00:04:56.175 "bdev_set_qd_sampling_period", 00:04:56.175 "bdev_get_bdevs", 00:04:56.175 "bdev_reset_iostat", 00:04:56.175 "bdev_get_iostat", 00:04:56.175 "bdev_examine", 00:04:56.175 "bdev_wait_for_examine", 00:04:56.175 "bdev_set_options", 00:04:56.175 "accel_get_stats", 00:04:56.175 "accel_set_options", 00:04:56.175 "accel_set_driver", 00:04:56.175 "accel_crypto_key_destroy", 00:04:56.175 "accel_crypto_keys_get", 00:04:56.175 "accel_crypto_key_create", 00:04:56.175 "accel_assign_opc", 00:04:56.175 "accel_get_module_info", 00:04:56.175 "accel_get_opc_assignments", 00:04:56.175 "vmd_rescan", 00:04:56.175 "vmd_remove_device", 00:04:56.175 "vmd_enable", 00:04:56.175 "sock_get_default_impl", 00:04:56.175 "sock_set_default_impl", 00:04:56.175 "sock_impl_set_options", 00:04:56.175 "sock_impl_get_options", 00:04:56.175 "iobuf_get_stats", 00:04:56.175 "iobuf_set_options", 00:04:56.175 "keyring_get_keys", 00:04:56.175 "framework_get_pci_devices", 00:04:56.175 "framework_get_config", 00:04:56.175 "framework_get_subsystems", 00:04:56.175 "fsdev_set_opts", 00:04:56.175 "fsdev_get_opts", 00:04:56.175 "trace_get_info", 00:04:56.175 "trace_get_tpoint_group_mask", 00:04:56.175 "trace_disable_tpoint_group", 00:04:56.175 "trace_enable_tpoint_group", 00:04:56.175 "trace_clear_tpoint_mask", 00:04:56.175 "trace_set_tpoint_mask", 00:04:56.175 "notify_get_notifications", 00:04:56.175 "notify_get_types", 00:04:56.175 "spdk_get_version", 00:04:56.175 "rpc_get_methods" 00:04:56.175 ] 00:04:56.175 08:41:32 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:56.175 08:41:32 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:56.175 08:41:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:56.175 08:41:32 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:56.175 08:41:32 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57820 00:04:56.175 08:41:32 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 57820 ']' 00:04:56.175 08:41:32 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 57820 00:04:56.175 08:41:32 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:04:56.175 08:41:32 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:56.175 08:41:32 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57820 00:04:56.175 killing process with pid 57820 00:04:56.175 08:41:32 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:56.175 08:41:32 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:56.175 08:41:32 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57820' 00:04:56.175 08:41:32 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 57820 00:04:56.175 08:41:32 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 57820 00:04:59.471 ************************************ 00:04:59.471 END TEST spdkcli_tcp 00:04:59.471 ************************************ 00:04:59.471 00:04:59.471 real 0m4.677s 00:04:59.471 user 0m7.917s 00:04:59.471 sys 0m0.825s 00:04:59.471 08:41:35 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.471 08:41:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:59.471 08:41:35 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:59.471 08:41:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.471 08:41:35 -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.471 08:41:35 -- common/autotest_common.sh@10 -- # set +x 00:04:59.471 ************************************ 00:04:59.471 START TEST dpdk_mem_utility 00:04:59.471 ************************************ 00:04:59.471 08:41:35 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:59.471 * Looking for test storage... 00:04:59.471 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:59.471 08:41:35 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:59.471 08:41:35 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:04:59.471 08:41:35 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:59.471 08:41:35 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:59.471 08:41:35 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.471 08:41:35 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.471 08:41:35 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.471 08:41:35 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.471 08:41:35 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.471 08:41:35 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.471 08:41:35 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.471 08:41:35 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.471 08:41:35 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.471 08:41:35 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.471 08:41:35 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.471 08:41:35 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:59.471 08:41:35 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:59.471 
08:41:35 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.471 08:41:35 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:59.471 08:41:35 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:59.471 08:41:35 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:59.471 08:41:35 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.471 08:41:35 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:59.471 08:41:35 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.471 08:41:35 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:59.471 08:41:35 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:59.471 08:41:35 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.471 08:41:35 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:59.471 08:41:35 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.471 08:41:35 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.471 08:41:35 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.471 08:41:35 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:59.471 08:41:35 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.471 08:41:35 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:59.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.471 --rc genhtml_branch_coverage=1 00:04:59.471 --rc genhtml_function_coverage=1 00:04:59.471 --rc genhtml_legend=1 00:04:59.471 --rc geninfo_all_blocks=1 00:04:59.471 --rc geninfo_unexecuted_blocks=1 00:04:59.472 00:04:59.472 ' 00:04:59.472 08:41:35 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:59.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.472 --rc 
genhtml_branch_coverage=1 00:04:59.472 --rc genhtml_function_coverage=1 00:04:59.472 --rc genhtml_legend=1 00:04:59.472 --rc geninfo_all_blocks=1 00:04:59.472 --rc geninfo_unexecuted_blocks=1 00:04:59.472 00:04:59.472 ' 00:04:59.472 08:41:35 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:59.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.472 --rc genhtml_branch_coverage=1 00:04:59.472 --rc genhtml_function_coverage=1 00:04:59.472 --rc genhtml_legend=1 00:04:59.472 --rc geninfo_all_blocks=1 00:04:59.472 --rc geninfo_unexecuted_blocks=1 00:04:59.472 00:04:59.472 ' 00:04:59.472 08:41:35 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:59.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.472 --rc genhtml_branch_coverage=1 00:04:59.472 --rc genhtml_function_coverage=1 00:04:59.472 --rc genhtml_legend=1 00:04:59.472 --rc geninfo_all_blocks=1 00:04:59.472 --rc geninfo_unexecuted_blocks=1 00:04:59.472 00:04:59.472 ' 00:04:59.472 08:41:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:59.472 08:41:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57947 00:04:59.472 08:41:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:59.472 08:41:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57947 00:04:59.472 08:41:35 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 57947 ']' 00:04:59.472 08:41:35 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.472 08:41:35 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:59.472 08:41:35 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:04:59.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.472 08:41:35 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:59.472 08:41:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:59.472 [2024-10-05 08:41:35.649138] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:04:59.472 [2024-10-05 08:41:35.649348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57947 ] 00:04:59.472 [2024-10-05 08:41:35.813757] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.732 [2024-10-05 08:41:36.057658] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.672 08:41:37 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:00.672 08:41:37 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:00.672 08:41:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:00.672 08:41:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:00.672 08:41:37 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:00.672 08:41:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:00.672 { 00:05:00.672 "filename": "/tmp/spdk_mem_dump.txt" 00:05:00.672 } 00:05:00.672 08:41:37 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:00.672 08:41:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:00.672 DPDK memory size 866.000000 MiB in 1 heap(s) 00:05:00.672 1 heaps totaling size 866.000000 MiB 00:05:00.672 size: 
866.000000 MiB heap id: 0 00:05:00.672 end heaps---------- 00:05:00.672 9 mempools totaling size 642.649841 MiB 00:05:00.672 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:00.672 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:00.672 size: 92.545471 MiB name: bdev_io_57947 00:05:00.672 size: 51.011292 MiB name: evtpool_57947 00:05:00.672 size: 50.003479 MiB name: msgpool_57947 00:05:00.672 size: 36.509338 MiB name: fsdev_io_57947 00:05:00.672 size: 21.763794 MiB name: PDU_Pool 00:05:00.672 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:00.672 size: 0.026123 MiB name: Session_Pool 00:05:00.672 end mempools------- 00:05:00.672 6 memzones totaling size 4.142822 MiB 00:05:00.672 size: 1.000366 MiB name: RG_ring_0_57947 00:05:00.672 size: 1.000366 MiB name: RG_ring_1_57947 00:05:00.672 size: 1.000366 MiB name: RG_ring_4_57947 00:05:00.672 size: 1.000366 MiB name: RG_ring_5_57947 00:05:00.672 size: 0.125366 MiB name: RG_ring_2_57947 00:05:00.672 size: 0.015991 MiB name: RG_ring_3_57947 00:05:00.672 end memzones------- 00:05:00.672 08:41:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:00.935 heap id: 0 total size: 866.000000 MiB number of busy elements: 313 number of free elements: 19 00:05:00.935 list of free elements. 
size: 19.914062 MiB 00:05:00.935 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:00.935 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:00.935 element at address: 0x200009600000 with size: 1.995972 MiB 00:05:00.935 element at address: 0x20000d800000 with size: 1.995972 MiB 00:05:00.935 element at address: 0x200007000000 with size: 1.991028 MiB 00:05:00.935 element at address: 0x20001bf00040 with size: 0.999939 MiB 00:05:00.935 element at address: 0x20001c300040 with size: 0.999939 MiB 00:05:00.935 element at address: 0x20001c400000 with size: 0.999084 MiB 00:05:00.935 element at address: 0x200035000000 with size: 0.994324 MiB 00:05:00.935 element at address: 0x20001bc00000 with size: 0.959656 MiB 00:05:00.935 element at address: 0x20001c700040 with size: 0.936401 MiB 00:05:00.936 element at address: 0x200000200000 with size: 0.832153 MiB 00:05:00.936 element at address: 0x20001de00000 with size: 0.562195 MiB 00:05:00.936 element at address: 0x200003e00000 with size: 0.490173 MiB 00:05:00.936 element at address: 0x20001c000000 with size: 0.488708 MiB 00:05:00.936 element at address: 0x20001c800000 with size: 0.485413 MiB 00:05:00.936 element at address: 0x200015e00000 with size: 0.443237 MiB 00:05:00.936 element at address: 0x20002b200000 with size: 0.390442 MiB 00:05:00.936 element at address: 0x200003a00000 with size: 0.353088 MiB 00:05:00.936 list of standard malloc elements. 
size: 199.287231 MiB 00:05:00.936 element at address: 0x20000d9fef80 with size: 132.000183 MiB 00:05:00.936 element at address: 0x2000097fef80 with size: 64.000183 MiB 00:05:00.936 element at address: 0x20001bdfff80 with size: 1.000183 MiB 00:05:00.936 element at address: 0x20001c1fff80 with size: 1.000183 MiB 00:05:00.936 element at address: 0x20001c5fff80 with size: 1.000183 MiB 00:05:00.936 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:00.936 element at address: 0x20001c7eff40 with size: 0.062683 MiB 00:05:00.936 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:00.936 element at address: 0x20000d7ff040 with size: 0.000427 MiB 00:05:00.936 element at address: 0x20001c7efdc0 with size: 0.000366 MiB 00:05:00.936 element at address: 0x200015dff040 with size: 0.000305 MiB 00:05:00.936 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:05:00.936 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:05:00.936 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:05:00.936 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:05:00.936 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:05:00.936 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:05:00.936 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:05:00.936 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:05:00.936 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:05:00.936 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:05:00.936 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:05:00.936 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:05:00.936 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:05:00.936 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:05:00.936 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:05:00.936 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:05:00.936 element at 
address: 0x2000002d6080 with size: 0.000244 MiB 00:05:00.936 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:05:00.936 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:05:00.936 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:05:00.936 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:05:00.936 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:05:00.936 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:05:00.936 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:05:00.936 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:05:00.936 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:05:00.936 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:05:00.936 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:05:00.936 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:05:00.936 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:05:00.936 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:05:00.936 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:05:00.936 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:05:00.936 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:05:00.936 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:05:00.936 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:05:00.936 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:05:00.936 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:05:00.936 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:05:00.936 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:05:00.936 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:05:00.936 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:00.936 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200003a7eac0 with size: 0.000244 MiB 
00:05:00.936 element at address: 0x200003a7ebc0 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200003a7ecc0 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200003a7edc0 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200003a7eec0 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200003a7efc0 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200003a7f0c0 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200003a7f1c0 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200003a7f2c0 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200003a7f3c0 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200003a7f4c0 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200003aff800 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200003affa80 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200003e7d7c0 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200003e7d8c0 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200003e7d9c0 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200003e7dac0 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200003e7dbc0 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200003e7dcc0 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200003e7ddc0 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200003e7dec0 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200003e7dfc0 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200003e7e0c0 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200003e7e1c0 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200003e7e2c0 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200003e7e3c0 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200003e7e4c0 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200003e7e5c0 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200003e7e6c0 with 
size: 0.000244 MiB 00:05:00.936 element at address: 0x200003e7e7c0 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200003e7e8c0 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200003e7e9c0 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200003e7eac0 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200003e7ebc0 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200003efef00 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200003eff000 with size: 0.000244 MiB 00:05:00.936 element at address: 0x20000d7ff200 with size: 0.000244 MiB 00:05:00.936 element at address: 0x20000d7ff300 with size: 0.000244 MiB 00:05:00.936 element at address: 0x20000d7ff400 with size: 0.000244 MiB 00:05:00.936 element at address: 0x20000d7ff500 with size: 0.000244 MiB 00:05:00.936 element at address: 0x20000d7ff600 with size: 0.000244 MiB 00:05:00.936 element at address: 0x20000d7ff700 with size: 0.000244 MiB 00:05:00.936 element at address: 0x20000d7ff800 with size: 0.000244 MiB 00:05:00.936 element at address: 0x20000d7ff900 with size: 0.000244 MiB 00:05:00.936 element at address: 0x20000d7ffa00 with size: 0.000244 MiB 00:05:00.936 element at address: 0x20000d7ffb00 with size: 0.000244 MiB 00:05:00.936 element at address: 0x20000d7ffc00 with size: 0.000244 MiB 00:05:00.936 element at address: 0x20000d7ffd00 with size: 0.000244 MiB 00:05:00.936 element at address: 0x20000d7ffe00 with size: 0.000244 MiB 00:05:00.936 element at address: 0x20000d7fff00 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200015dff180 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200015dff280 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200015dff380 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200015dff480 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200015dff580 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200015dff680 with size: 0.000244 MiB 00:05:00.936 element at address: 
0x200015dff780 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200015dff880 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200015dff980 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200015dffa80 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200015dffb80 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200015dffc80 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200015dfff00 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200015e71780 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200015e71880 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200015e71980 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200015e71a80 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200015e71b80 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200015e71c80 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200015e71d80 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200015e71e80 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200015e71f80 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200015e72080 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200015e72180 with size: 0.000244 MiB 00:05:00.936 element at address: 0x200015ef24c0 with size: 0.000244 MiB 00:05:00.936 element at address: 0x20001bcfdd00 with size: 0.000244 MiB 00:05:00.936 element at address: 0x20001c07d1c0 with size: 0.000244 MiB 00:05:00.936 element at address: 0x20001c07d2c0 with size: 0.000244 MiB 00:05:00.936 element at address: 0x20001c07d3c0 with size: 0.000244 MiB 00:05:00.936 element at address: 0x20001c07d4c0 with size: 0.000244 MiB 00:05:00.936 element at address: 0x20001c07d5c0 with size: 0.000244 MiB 00:05:00.936 element at address: 0x20001c07d6c0 with size: 0.000244 MiB 00:05:00.936 element at address: 0x20001c07d7c0 with size: 0.000244 MiB 00:05:00.936 element at address: 0x20001c07d8c0 with size: 0.000244 MiB 00:05:00.936 
element at address: 0x20001c07d9c0 with size: 0.000244 MiB 00:05:00.936 element at address: 0x20001c0fdd00 with size: 0.000244 MiB 00:05:00.936 element at address: 0x20001c4ffc40 with size: 0.000244 MiB 00:05:00.936 element at address: 0x20001c7efbc0 with size: 0.000244 MiB 00:05:00.936 element at address: 0x20001c7efcc0 with size: 0.000244 MiB 00:05:00.936 element at address: 0x20001c8bc680 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de8fec0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de8ffc0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de900c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de901c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de902c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de903c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de904c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de905c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de906c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de907c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de908c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de909c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de90ac0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de90bc0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de90cc0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de90dc0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de90ec0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de90fc0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de910c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de911c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de912c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de913c0 with size: 0.000244 
MiB 00:05:00.937 element at address: 0x20001de914c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de915c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de916c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de917c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de918c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de919c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de91ac0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de91bc0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de91cc0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de91dc0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de91ec0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de91fc0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de920c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de921c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de922c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de923c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de924c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de925c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de926c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de927c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de928c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de929c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de92ac0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de92bc0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de92cc0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de92dc0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de92ec0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de92fc0 
with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de930c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de931c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de932c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de933c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de934c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de935c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de936c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de937c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de938c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de939c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de93ac0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de93bc0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de93cc0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de93dc0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de93ec0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de93fc0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de940c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de941c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de942c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de943c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de944c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de945c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de946c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de947c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de948c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de949c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de94ac0 with size: 0.000244 MiB 00:05:00.937 element at 
address: 0x20001de94bc0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de94cc0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de94dc0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de94ec0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de94fc0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de950c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de951c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de952c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20001de953c0 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b263f40 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b264040 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26ad00 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26af80 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26b080 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26b180 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26b280 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26b380 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26b480 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26b580 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26b680 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26b780 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26b880 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26b980 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26ba80 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26bb80 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26bc80 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26bd80 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26be80 with size: 0.000244 MiB 
00:05:00.937 element at address: 0x20002b26bf80 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26c080 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26c180 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26c280 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26c380 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26c480 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26c580 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26c680 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26c780 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26c880 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26c980 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26ca80 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26cb80 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26cc80 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26cd80 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26ce80 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26cf80 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26d080 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26d180 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26d280 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26d380 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26d480 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26d580 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26d680 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26d780 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26d880 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26d980 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26da80 with 
size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26db80 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26dc80 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26dd80 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26de80 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26df80 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26e080 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26e180 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26e280 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26e380 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26e480 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26e580 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26e680 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26e780 with size: 0.000244 MiB 00:05:00.937 element at address: 0x20002b26e880 with size: 0.000244 MiB 00:05:00.938 element at address: 0x20002b26e980 with size: 0.000244 MiB 00:05:00.938 element at address: 0x20002b26ea80 with size: 0.000244 MiB 00:05:00.938 element at address: 0x20002b26eb80 with size: 0.000244 MiB 00:05:00.938 element at address: 0x20002b26ec80 with size: 0.000244 MiB 00:05:00.938 element at address: 0x20002b26ed80 with size: 0.000244 MiB 00:05:00.938 element at address: 0x20002b26ee80 with size: 0.000244 MiB 00:05:00.938 element at address: 0x20002b26ef80 with size: 0.000244 MiB 00:05:00.938 element at address: 0x20002b26f080 with size: 0.000244 MiB 00:05:00.938 element at address: 0x20002b26f180 with size: 0.000244 MiB 00:05:00.938 element at address: 0x20002b26f280 with size: 0.000244 MiB 00:05:00.938 element at address: 0x20002b26f380 with size: 0.000244 MiB 00:05:00.938 element at address: 0x20002b26f480 with size: 0.000244 MiB 00:05:00.938 element at address: 0x20002b26f580 with size: 0.000244 MiB 00:05:00.938 element at address: 
0x20002b26f680 with size: 0.000244 MiB 00:05:00.938 element at address: 0x20002b26f780 with size: 0.000244 MiB 00:05:00.938 element at address: 0x20002b26f880 with size: 0.000244 MiB 00:05:00.938 element at address: 0x20002b26f980 with size: 0.000244 MiB 00:05:00.938 element at address: 0x20002b26fa80 with size: 0.000244 MiB 00:05:00.938 element at address: 0x20002b26fb80 with size: 0.000244 MiB 00:05:00.938 element at address: 0x20002b26fc80 with size: 0.000244 MiB 00:05:00.938 element at address: 0x20002b26fd80 with size: 0.000244 MiB 00:05:00.938 element at address: 0x20002b26fe80 with size: 0.000244 MiB 00:05:00.938 list of memzone associated elements. size: 646.798706 MiB 00:05:00.938 element at address: 0x20001de954c0 with size: 211.416809 MiB 00:05:00.938 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:00.938 element at address: 0x20002b26ff80 with size: 157.562622 MiB 00:05:00.938 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:00.938 element at address: 0x200015ff4740 with size: 92.045105 MiB 00:05:00.938 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57947_0 00:05:00.938 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:05:00.938 associated memzone info: size: 48.002930 MiB name: MP_evtpool_57947_0 00:05:00.938 element at address: 0x200003fff340 with size: 48.003113 MiB 00:05:00.938 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57947_0 00:05:00.938 element at address: 0x2000071fdb40 with size: 36.008972 MiB 00:05:00.938 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57947_0 00:05:00.938 element at address: 0x20001c9be900 with size: 20.255615 MiB 00:05:00.938 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:00.938 element at address: 0x2000351feb00 with size: 18.005127 MiB 00:05:00.938 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:00.938 element at address: 0x2000005ffdc0 
with size: 2.000549 MiB 00:05:00.938 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_57947 00:05:00.938 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:05:00.938 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57947 00:05:00.938 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:00.938 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57947 00:05:00.938 element at address: 0x20001c0fde00 with size: 1.008179 MiB 00:05:00.938 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:00.938 element at address: 0x20001c8bc780 with size: 1.008179 MiB 00:05:00.938 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:00.938 element at address: 0x20001bcfde00 with size: 1.008179 MiB 00:05:00.938 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:00.938 element at address: 0x200015ef25c0 with size: 1.008179 MiB 00:05:00.938 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:00.938 element at address: 0x200003eff100 with size: 1.000549 MiB 00:05:00.938 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57947 00:05:00.938 element at address: 0x200003affb80 with size: 1.000549 MiB 00:05:00.938 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57947 00:05:00.938 element at address: 0x20001c4ffd40 with size: 1.000549 MiB 00:05:00.938 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57947 00:05:00.938 element at address: 0x2000350fe8c0 with size: 1.000549 MiB 00:05:00.938 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57947 00:05:00.938 element at address: 0x200003a7f5c0 with size: 0.500549 MiB 00:05:00.938 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57947 00:05:00.938 element at address: 0x200003e7ecc0 with size: 0.500549 MiB 00:05:00.938 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57947 00:05:00.938 element at address: 0x20001c07dac0 with 
size: 0.500549 MiB 00:05:00.938 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:00.938 element at address: 0x200015e72280 with size: 0.500549 MiB 00:05:00.938 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:00.938 element at address: 0x20001c87c440 with size: 0.250549 MiB 00:05:00.938 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:00.938 element at address: 0x200003a5e880 with size: 0.125549 MiB 00:05:00.938 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57947 00:05:00.938 element at address: 0x20001bcf5ac0 with size: 0.031799 MiB 00:05:00.938 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:00.938 element at address: 0x20002b264140 with size: 0.023804 MiB 00:05:00.938 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:00.938 element at address: 0x200003a5a640 with size: 0.016174 MiB 00:05:00.938 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57947 00:05:00.938 element at address: 0x20002b26a2c0 with size: 0.002502 MiB 00:05:00.938 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:00.938 element at address: 0x2000002d6180 with size: 0.000366 MiB 00:05:00.938 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57947 00:05:00.938 element at address: 0x200003aff900 with size: 0.000366 MiB 00:05:00.938 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57947 00:05:00.938 element at address: 0x200015dffd80 with size: 0.000366 MiB 00:05:00.938 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57947 00:05:00.938 element at address: 0x20002b26ae00 with size: 0.000366 MiB 00:05:00.938 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:00.938 08:41:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:00.938 08:41:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 
-- # killprocess 57947 00:05:00.938 08:41:37 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 57947 ']' 00:05:00.938 08:41:37 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 57947 00:05:00.938 08:41:37 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:00.938 08:41:37 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:00.938 08:41:37 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57947 00:05:00.938 killing process with pid 57947 00:05:00.938 08:41:37 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:00.938 08:41:37 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:00.938 08:41:37 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57947' 00:05:00.938 08:41:37 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 57947 00:05:00.938 08:41:37 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 57947 00:05:03.495 ************************************ 00:05:03.495 END TEST dpdk_mem_utility 00:05:03.495 ************************************ 00:05:03.495 00:05:03.495 real 0m4.537s 00:05:03.495 user 0m4.269s 00:05:03.495 sys 0m0.745s 00:05:03.495 08:41:39 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.495 08:41:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:03.495 08:41:39 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:03.495 08:41:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:03.495 08:41:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:03.495 08:41:39 -- common/autotest_common.sh@10 -- # set +x 00:05:03.495 ************************************ 00:05:03.495 START TEST event 00:05:03.495 ************************************ 00:05:03.495 08:41:39 event -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:03.755 * Looking for test storage... 00:05:03.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:03.755 08:41:40 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:03.755 08:41:40 event -- common/autotest_common.sh@1681 -- # lcov --version 00:05:03.755 08:41:40 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:03.755 08:41:40 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:03.755 08:41:40 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.755 08:41:40 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.755 08:41:40 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.755 08:41:40 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.755 08:41:40 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.755 08:41:40 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.755 08:41:40 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.755 08:41:40 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.755 08:41:40 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.755 08:41:40 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.755 08:41:40 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.755 08:41:40 event -- scripts/common.sh@344 -- # case "$op" in 00:05:03.755 08:41:40 event -- scripts/common.sh@345 -- # : 1 00:05:03.755 08:41:40 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.755 08:41:40 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.755 08:41:40 event -- scripts/common.sh@365 -- # decimal 1 00:05:03.755 08:41:40 event -- scripts/common.sh@353 -- # local d=1 00:05:03.755 08:41:40 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.755 08:41:40 event -- scripts/common.sh@355 -- # echo 1 00:05:03.755 08:41:40 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.755 08:41:40 event -- scripts/common.sh@366 -- # decimal 2 00:05:03.755 08:41:40 event -- scripts/common.sh@353 -- # local d=2 00:05:03.755 08:41:40 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.755 08:41:40 event -- scripts/common.sh@355 -- # echo 2 00:05:03.755 08:41:40 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.755 08:41:40 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.755 08:41:40 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.755 08:41:40 event -- scripts/common.sh@368 -- # return 0 00:05:03.755 08:41:40 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.755 08:41:40 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:03.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.755 --rc genhtml_branch_coverage=1 00:05:03.755 --rc genhtml_function_coverage=1 00:05:03.755 --rc genhtml_legend=1 00:05:03.755 --rc geninfo_all_blocks=1 00:05:03.755 --rc geninfo_unexecuted_blocks=1 00:05:03.755 00:05:03.755 ' 00:05:03.755 08:41:40 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:03.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.755 --rc genhtml_branch_coverage=1 00:05:03.755 --rc genhtml_function_coverage=1 00:05:03.755 --rc genhtml_legend=1 00:05:03.755 --rc geninfo_all_blocks=1 00:05:03.755 --rc geninfo_unexecuted_blocks=1 00:05:03.755 00:05:03.755 ' 00:05:03.755 08:41:40 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:03.755 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:03.755 --rc genhtml_branch_coverage=1 00:05:03.755 --rc genhtml_function_coverage=1 00:05:03.755 --rc genhtml_legend=1 00:05:03.755 --rc geninfo_all_blocks=1 00:05:03.755 --rc geninfo_unexecuted_blocks=1 00:05:03.755 00:05:03.755 ' 00:05:03.755 08:41:40 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:03.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.755 --rc genhtml_branch_coverage=1 00:05:03.755 --rc genhtml_function_coverage=1 00:05:03.755 --rc genhtml_legend=1 00:05:03.755 --rc geninfo_all_blocks=1 00:05:03.755 --rc geninfo_unexecuted_blocks=1 00:05:03.755 00:05:03.755 ' 00:05:03.755 08:41:40 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:03.755 08:41:40 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:03.755 08:41:40 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:03.755 08:41:40 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:03.755 08:41:40 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:03.755 08:41:40 event -- common/autotest_common.sh@10 -- # set +x 00:05:03.755 ************************************ 00:05:03.755 START TEST event_perf 00:05:03.755 ************************************ 00:05:03.755 08:41:40 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:03.755 Running I/O for 1 seconds...[2024-10-05 08:41:40.203340] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:05:03.755 [2024-10-05 08:41:40.203454] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58061 ] 00:05:04.015 [2024-10-05 08:41:40.368754] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:04.275 [2024-10-05 08:41:40.638481] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.275 [2024-10-05 08:41:40.638862] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.275 [2024-10-05 08:41:40.638895] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:05:04.276 Running I/O for 1 seconds...[2024-10-05 08:41:40.638721] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:05.656 00:05:05.656 lcore 0: 85128 00:05:05.656 lcore 1: 85131 00:05:05.656 lcore 2: 85128 00:05:05.656 lcore 3: 85131 00:05:05.656 done. 
00:05:05.656 00:05:05.656 real 0m1.895s 00:05:05.656 user 0m4.633s 00:05:05.656 sys 0m0.137s 00:05:05.656 08:41:42 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:05.656 08:41:42 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:05.656 ************************************ 00:05:05.656 END TEST event_perf 00:05:05.656 ************************************ 00:05:05.656 08:41:42 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:05.656 08:41:42 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:05.656 08:41:42 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.656 08:41:42 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.656 ************************************ 00:05:05.656 START TEST event_reactor 00:05:05.656 ************************************ 00:05:05.656 08:41:42 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:05.917 [2024-10-05 08:41:42.173328] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:05:05.917 [2024-10-05 08:41:42.173492] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58106 ] 00:05:05.917 [2024-10-05 08:41:42.337260] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.177 [2024-10-05 08:41:42.584861] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.558 test_start 00:05:07.558 oneshot 00:05:07.558 tick 100 00:05:07.558 tick 100 00:05:07.558 tick 250 00:05:07.558 tick 100 00:05:07.558 tick 100 00:05:07.558 tick 100 00:05:07.558 tick 250 00:05:07.558 tick 500 00:05:07.558 tick 100 00:05:07.558 tick 100 00:05:07.558 tick 250 00:05:07.558 tick 100 00:05:07.558 tick 100 00:05:07.558 test_end 00:05:07.558 00:05:07.558 real 0m1.860s 00:05:07.558 user 0m1.621s 00:05:07.558 sys 0m0.130s 00:05:07.558 ************************************ 00:05:07.558 END TEST event_reactor 00:05:07.558 ************************************ 00:05:07.558 08:41:43 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:07.558 08:41:43 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:07.819 08:41:44 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:07.819 08:41:44 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:07.819 08:41:44 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:07.819 08:41:44 event -- common/autotest_common.sh@10 -- # set +x 00:05:07.819 ************************************ 00:05:07.819 START TEST event_reactor_perf 00:05:07.819 ************************************ 00:05:07.819 08:41:44 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:07.819 [2024-10-05 
08:41:44.102034] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:05:07.819 [2024-10-05 08:41:44.102148] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58148 ] 00:05:07.819 [2024-10-05 08:41:44.271002] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.079 [2024-10-05 08:41:44.522965] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.460 test_start 00:05:09.460 test_end 00:05:09.460 Performance: 416271 events per second 00:05:09.460 00:05:09.460 real 0m1.868s 00:05:09.460 user 0m1.625s 00:05:09.460 sys 0m0.134s 00:05:09.460 ************************************ 00:05:09.460 END TEST event_reactor_perf 00:05:09.460 ************************************ 00:05:09.460 08:41:45 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:09.460 08:41:45 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:09.720 08:41:45 event -- event/event.sh@49 -- # uname -s 00:05:09.720 08:41:45 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:09.720 08:41:45 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:09.720 08:41:45 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:09.720 08:41:45 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:09.720 08:41:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:09.720 ************************************ 00:05:09.720 START TEST event_scheduler 00:05:09.720 ************************************ 00:05:09.720 08:41:45 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:09.720 * Looking for test storage... 
00:05:09.720 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:09.720 08:41:46 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:09.721 08:41:46 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:05:09.721 08:41:46 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:09.980 08:41:46 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:09.980 08:41:46 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.980 08:41:46 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.980 08:41:46 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.980 08:41:46 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.980 08:41:46 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.980 08:41:46 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.980 08:41:46 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.980 08:41:46 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.980 08:41:46 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.980 08:41:46 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.980 08:41:46 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.980 08:41:46 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:09.980 08:41:46 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:09.980 08:41:46 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.980 08:41:46 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:09.980 08:41:46 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:09.980 08:41:46 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:09.980 08:41:46 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.980 08:41:46 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:09.980 08:41:46 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.980 08:41:46 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:09.980 08:41:46 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:09.980 08:41:46 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.980 08:41:46 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:09.980 08:41:46 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.980 08:41:46 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.980 08:41:46 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.980 08:41:46 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:09.980 08:41:46 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.980 08:41:46 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:09.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.980 --rc genhtml_branch_coverage=1 00:05:09.981 --rc genhtml_function_coverage=1 00:05:09.981 --rc genhtml_legend=1 00:05:09.981 --rc geninfo_all_blocks=1 00:05:09.981 --rc geninfo_unexecuted_blocks=1 00:05:09.981 00:05:09.981 ' 00:05:09.981 08:41:46 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:09.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.981 --rc genhtml_branch_coverage=1 00:05:09.981 --rc genhtml_function_coverage=1 00:05:09.981 --rc 
genhtml_legend=1 00:05:09.981 --rc geninfo_all_blocks=1 00:05:09.981 --rc geninfo_unexecuted_blocks=1 00:05:09.981 00:05:09.981 ' 00:05:09.981 08:41:46 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:09.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.981 --rc genhtml_branch_coverage=1 00:05:09.981 --rc genhtml_function_coverage=1 00:05:09.981 --rc genhtml_legend=1 00:05:09.981 --rc geninfo_all_blocks=1 00:05:09.981 --rc geninfo_unexecuted_blocks=1 00:05:09.981 00:05:09.981 ' 00:05:09.981 08:41:46 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:09.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.981 --rc genhtml_branch_coverage=1 00:05:09.981 --rc genhtml_function_coverage=1 00:05:09.981 --rc genhtml_legend=1 00:05:09.981 --rc geninfo_all_blocks=1 00:05:09.981 --rc geninfo_unexecuted_blocks=1 00:05:09.981 00:05:09.981 ' 00:05:09.981 08:41:46 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:09.981 08:41:46 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58224 00:05:09.981 08:41:46 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:09.981 08:41:46 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:09.981 08:41:46 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58224 00:05:09.981 08:41:46 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 58224 ']' 00:05:09.981 08:41:46 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.981 08:41:46 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:09.981 08:41:46 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:05:09.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.981 08:41:46 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:09.981 08:41:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:09.981 [2024-10-05 08:41:46.329566] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:05:09.981 [2024-10-05 08:41:46.329817] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58224 ] 00:05:10.241 [2024-10-05 08:41:46.484698] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:10.502 [2024-10-05 08:41:46.784065] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.502 [2024-10-05 08:41:46.784290] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.502 [2024-10-05 08:41:46.784434] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:10.502 [2024-10-05 08:41:46.784473] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:05:10.761 08:41:47 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:10.761 08:41:47 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:10.761 08:41:47 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:10.761 08:41:47 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.761 08:41:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:10.761 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:10.761 POWER: Cannot set governor of lcore 0 to userspace 00:05:10.761 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:10.761 POWER: Cannot set governor of lcore 0 to performance 00:05:10.761 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:10.761 POWER: Cannot set governor of lcore 0 to userspace 00:05:10.761 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:10.761 POWER: Cannot set governor of lcore 0 to userspace 00:05:10.761 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:10.761 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:10.761 POWER: Unable to set Power Management Environment for lcore 0 00:05:10.761 [2024-10-05 08:41:47.122010] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:10.761 [2024-10-05 08:41:47.122039] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:10.761 [2024-10-05 08:41:47.122052] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:10.761 [2024-10-05 08:41:47.122079] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:10.761 [2024-10-05 08:41:47.122114] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:10.761 [2024-10-05 08:41:47.122127] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:10.761 08:41:47 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.761 08:41:47 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:10.761 08:41:47 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.761 08:41:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:11.333 [2024-10-05 08:41:47.506645] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:11.333 08:41:47 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.333 08:41:47 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:11.333 08:41:47 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:11.333 08:41:47 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.333 08:41:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:11.333 ************************************ 00:05:11.333 START TEST scheduler_create_thread 00:05:11.333 ************************************ 00:05:11.333 08:41:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:11.333 08:41:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:11.333 08:41:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.333 08:41:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.333 2 00:05:11.333 08:41:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.333 08:41:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:11.333 08:41:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.333 08:41:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.333 3 00:05:11.333 08:41:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.333 08:41:47 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:11.333 08:41:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.333 08:41:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.333 4 00:05:11.333 08:41:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.333 08:41:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:11.333 08:41:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.333 08:41:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.333 5 00:05:11.333 08:41:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.333 08:41:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:11.333 08:41:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.333 08:41:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.333 6 00:05:11.333 08:41:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.333 08:41:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:11.333 08:41:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.333 08:41:47 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:11.333 7 00:05:11.333 08:41:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.333 08:41:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:11.333 08:41:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.333 08:41:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.333 8 00:05:11.333 08:41:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.333 08:41:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:11.333 08:41:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.333 08:41:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.333 9 00:05:11.333 08:41:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.333 08:41:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:11.333 08:41:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.333 08:41:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.714 10 00:05:12.714 08:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.714 08:41:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:12.714 08:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.714 08:41:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.285 08:41:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.285 08:41:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:13.285 08:41:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:13.285 08:41:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.285 08:41:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.223 08:41:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.223 08:41:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:14.223 08:41:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.223 08:41:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.791 08:41:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.791 08:41:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:14.791 08:41:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:14.791 08:41:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.791 08:41:51 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.359 ************************************ 00:05:15.359 END TEST scheduler_create_thread 00:05:15.359 ************************************ 00:05:15.359 08:41:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.359 00:05:15.359 real 0m4.204s 00:05:15.359 user 0m0.023s 00:05:15.359 sys 0m0.013s 00:05:15.359 08:41:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:15.359 08:41:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.359 08:41:51 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:15.359 08:41:51 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58224 00:05:15.359 08:41:51 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 58224 ']' 00:05:15.359 08:41:51 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 58224 00:05:15.359 08:41:51 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:15.359 08:41:51 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:15.359 08:41:51 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58224 00:05:15.359 killing process with pid 58224 00:05:15.359 08:41:51 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:15.359 08:41:51 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:15.359 08:41:51 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58224' 00:05:15.360 08:41:51 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 58224 00:05:15.360 08:41:51 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 58224 00:05:15.619 [2024-10-05 08:41:52.006328] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:17.525 00:05:17.525 real 0m7.474s 00:05:17.525 user 0m16.300s 00:05:17.525 sys 0m0.629s 00:05:17.525 08:41:53 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.526 ************************************ 00:05:17.526 END TEST event_scheduler 00:05:17.526 ************************************ 00:05:17.526 08:41:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:17.526 08:41:53 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:17.526 08:41:53 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:17.526 08:41:53 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:17.526 08:41:53 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:17.526 08:41:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:17.526 ************************************ 00:05:17.526 START TEST app_repeat 00:05:17.526 ************************************ 00:05:17.526 08:41:53 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:17.526 08:41:53 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.526 08:41:53 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.526 08:41:53 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:17.526 08:41:53 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.526 08:41:53 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:17.526 08:41:53 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:17.526 08:41:53 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:17.526 08:41:53 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58352 00:05:17.526 08:41:53 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:17.526 
08:41:53 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:17.526 Process app_repeat pid: 58352 00:05:17.526 spdk_app_start Round 0 00:05:17.526 08:41:53 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58352' 00:05:17.526 08:41:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:17.526 08:41:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:17.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:17.526 08:41:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58352 /var/tmp/spdk-nbd.sock 00:05:17.526 08:41:53 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58352 ']' 00:05:17.526 08:41:53 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:17.526 08:41:53 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:17.526 08:41:53 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:17.526 08:41:53 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:17.526 08:41:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:17.526 [2024-10-05 08:41:53.613636] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:05:17.526 [2024-10-05 08:41:53.613735] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58352 ] 00:05:17.526 [2024-10-05 08:41:53.777463] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:17.786 [2024-10-05 08:41:54.041899] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.786 [2024-10-05 08:41:54.041934] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.046 08:41:54 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:18.046 08:41:54 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:18.046 08:41:54 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:18.305 Malloc0 00:05:18.305 08:41:54 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:18.566 Malloc1 00:05:18.566 08:41:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:18.566 08:41:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.566 08:41:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:18.566 08:41:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:18.566 08:41:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.566 08:41:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:18.566 08:41:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:18.566 08:41:55 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:18.566 08:41:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:18.566 08:41:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:18.566 08:41:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:18.566 08:41:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:18.566 08:41:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:18.566 08:41:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:18.566 08:41:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:18.566 08:41:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:18.826 /dev/nbd0
00:05:18.826 08:41:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:18.826 08:41:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:18.826 08:41:55 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:05:18.826 08:41:55 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:18.826 08:41:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:18.826 08:41:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:18.826 08:41:55 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:05:18.826 08:41:55 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:18.826 08:41:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:18.826 08:41:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:18.826 08:41:55 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:18.826 1+0 records in
00:05:18.826 1+0 records out
00:05:18.826 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000472717 s, 8.7 MB/s
00:05:18.826 08:41:55 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:18.826 08:41:55 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:18.826 08:41:55 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:18.826 08:41:55 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:18.826 08:41:55 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:18.826 08:41:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:18.826 08:41:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:18.826 08:41:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:19.086 /dev/nbd1
00:05:19.086 08:41:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:19.086 08:41:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:19.086 08:41:55 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:05:19.086 08:41:55 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:19.086 08:41:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:19.086 08:41:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:19.086 08:41:55 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:05:19.086 08:41:55 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:19.086 08:41:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:19.086 08:41:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:19.086 08:41:55 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:19.086 1+0 records in
00:05:19.086 1+0 records out
00:05:19.086 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000443902 s, 9.2 MB/s
00:05:19.086 08:41:55 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:19.086 08:41:55 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:19.086 08:41:55 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:19.086 08:41:55 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:19.086 08:41:55 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:19.086 08:41:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:19.086 08:41:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:19.086 08:41:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:19.086 08:41:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:19.086 08:41:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:19.346 08:41:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:19.346 {
00:05:19.346 "nbd_device": "/dev/nbd0",
00:05:19.346 "bdev_name": "Malloc0"
00:05:19.346 },
00:05:19.346 {
00:05:19.346 "nbd_device": "/dev/nbd1",
00:05:19.346 "bdev_name": "Malloc1"
00:05:19.346 }
00:05:19.346 ]'
00:05:19.346 08:41:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:19.346 {
00:05:19.346 "nbd_device": "/dev/nbd0",
00:05:19.346 "bdev_name": "Malloc0"
00:05:19.346 },
00:05:19.346 {
00:05:19.346 "nbd_device": "/dev/nbd1",
00:05:19.346 "bdev_name": "Malloc1"
00:05:19.346 }
00:05:19.346 ]'
00:05:19.346 08:41:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:19.346 08:41:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:19.346 /dev/nbd1'
00:05:19.346 08:41:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:19.346 /dev/nbd1'
00:05:19.346 08:41:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:19.346 08:41:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:19.346 08:41:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:19.346 08:41:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:19.346 08:41:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:19.346 08:41:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:19.346 08:41:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:19.346 08:41:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:19.346 08:41:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:19.346 08:41:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:19.346 08:41:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:19.346 08:41:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:19.346 256+0 records in
00:05:19.346 256+0 records out
00:05:19.346 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138836 s, 75.5 MB/s
00:05:19.346 08:41:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:19.346 08:41:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:19.607 256+0 records in
00:05:19.607 256+0 records out
00:05:19.607 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0266815 s, 39.3 MB/s
00:05:19.607 08:41:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:19.607 08:41:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:19.607 256+0 records in
00:05:19.607 256+0 records out
00:05:19.607 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0281819 s, 37.2 MB/s
00:05:19.607 08:41:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:19.607 08:41:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:19.607 08:41:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:19.607 08:41:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:19.607 08:41:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:19.607 08:41:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:19.607 08:41:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:19.607 08:41:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:19.607 08:41:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:05:19.607 08:41:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:19.607 08:41:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:05:19.607 08:41:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:19.607 08:41:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:19.607 08:41:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:19.607 08:41:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:19.607 08:41:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:19.607 08:41:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:19.607 08:41:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:19.607 08:41:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:19.867 08:41:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:19.867 08:41:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:19.867 08:41:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:19.867 08:41:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:19.867 08:41:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:19.867 08:41:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:19.867 08:41:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:19.867 08:41:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:19.867 08:41:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:19.867 08:41:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:19.867 08:41:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:20.127 08:41:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:20.127 08:41:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:20.127 08:41:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:20.127 08:41:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:20.127 08:41:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:20.127 08:41:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:20.127 08:41:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:20.127 08:41:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:20.127 08:41:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:20.127 08:41:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:20.127 08:41:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:20.127 08:41:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:20.127 08:41:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:20.127 08:41:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:20.127 08:41:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:20.127 08:41:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:20.127 08:41:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:20.127 08:41:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:20.127 08:41:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:20.127 08:41:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:20.127 08:41:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:20.127 08:41:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:20.127 08:41:56 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:20.697 08:41:57 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:22.085 [2024-10-05 08:41:58.361224] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:22.345 [2024-10-05 08:41:58.585673] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:05:22.345 [2024-10-05 08:41:58.585676] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:05:22.345 [2024-10-05 08:41:58.801764] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:22.345 [2024-10-05 08:41:58.802016] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:23.726 spdk_app_start Round 1
00:05:23.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:23.726 08:42:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:23.726 08:42:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:05:23.726 08:42:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58352 /var/tmp/spdk-nbd.sock
00:05:23.726 08:42:00 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58352 ']'
00:05:23.726 08:42:00 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:23.726 08:42:00 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:23.726 08:42:00 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:23.726 08:42:00 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:23.726 08:42:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:23.986 08:42:00 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:23.986 08:42:00 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:05:23.986 08:42:00 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:24.246 Malloc0
00:05:24.246 08:42:00 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:24.506 Malloc1
00:05:24.506 08:42:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:24.506 08:42:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:24.506 08:42:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:24.506 08:42:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:24.506 08:42:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:24.506 08:42:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:24.506 08:42:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:24.506 08:42:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:24.506 08:42:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:24.506 08:42:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:24.506 08:42:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:24.506 08:42:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:24.506 08:42:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:24.506 08:42:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:24.506 08:42:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:24.506 08:42:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:24.506 /dev/nbd0
00:05:24.506 08:42:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:24.766 08:42:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:24.766 08:42:00 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:05:24.766 08:42:00 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:24.766 08:42:00 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:24.766 08:42:00 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:24.766 08:42:00 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:05:24.766 08:42:00 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:24.766 08:42:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:24.766 08:42:00 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:24.766 08:42:00 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:24.766 1+0 records in
00:05:24.766 1+0 records out
00:05:24.766 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000203957 s, 20.1 MB/s
00:05:24.766 08:42:00 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:24.766 08:42:00 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:24.766 08:42:00 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
08:42:00 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:24.766 08:42:00 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:24.766 08:42:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:24.766 08:42:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:24.766 08:42:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:24.766 /dev/nbd1
00:05:24.766 08:42:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:24.766 08:42:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:24.766 08:42:01 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:05:24.766 08:42:01 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:24.766 08:42:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:24.766 08:42:01 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:24.766 08:42:01 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:05:24.766 08:42:01 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:24.766 08:42:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:24.766 08:42:01 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:24.766 08:42:01 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:24.766 1+0 records in
00:05:24.766 1+0 records out
00:05:24.766 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000207612 s, 19.7 MB/s
00:05:24.766 08:42:01 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:24.766 08:42:01 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:24.766 08:42:01 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:24.766 08:42:01 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:24.766 08:42:01 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:24.766 08:42:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:24.766 08:42:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:24.766 08:42:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:24.766 08:42:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:24.766 08:42:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:25.025 08:42:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:25.025 {
00:05:25.025 "nbd_device": "/dev/nbd0",
00:05:25.025 "bdev_name": "Malloc0"
00:05:25.025 },
00:05:25.025 {
00:05:25.025 "nbd_device": "/dev/nbd1",
00:05:25.025 "bdev_name": "Malloc1"
00:05:25.025 }
00:05:25.025 ]'
00:05:25.025 08:42:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:25.025 {
00:05:25.025 "nbd_device": "/dev/nbd0",
00:05:25.025 "bdev_name": "Malloc0"
00:05:25.025 },
00:05:25.025 {
00:05:25.025 "nbd_device": "/dev/nbd1",
00:05:25.025 "bdev_name": "Malloc1"
00:05:25.025 }
00:05:25.025 ]'
00:05:25.025 08:42:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:25.025 08:42:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:25.025 /dev/nbd1'
00:05:25.025 08:42:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:25.025 /dev/nbd1'
00:05:25.025 08:42:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:25.025 08:42:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:25.025 08:42:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
08:42:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:25.025 08:42:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:25.025 08:42:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:25.025 08:42:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:25.025 08:42:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:25.025 08:42:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:25.025 08:42:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:25.025 08:42:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:25.025 08:42:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:25.025 256+0 records in
00:05:25.025 256+0 records out
00:05:25.025 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00465244 s, 225 MB/s
00:05:25.025 08:42:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:25.025 08:42:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:25.285 256+0 records in
00:05:25.285 256+0 records out
00:05:25.285 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0240759 s, 43.6 MB/s
00:05:25.285 08:42:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:25.285 08:42:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:25.285 256+0 records in
00:05:25.285 256+0 records out
00:05:25.285 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0264375 s, 39.7 MB/s
00:05:25.285 08:42:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:25.285 08:42:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:25.285 08:42:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:25.285 08:42:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:25.285 08:42:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:25.285 08:42:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:25.285 08:42:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:25.285 08:42:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:25.285 08:42:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:05:25.285 08:42:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:25.285 08:42:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:05:25.285 08:42:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:25.285 08:42:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:25.285 08:42:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:25.285 08:42:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:25.285 08:42:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:25.285 08:42:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:25.285 08:42:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:25.285 08:42:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:25.545 08:42:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:25.545 08:42:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:25.545 08:42:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:25.545 08:42:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:25.545 08:42:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:25.545 08:42:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:25.545 08:42:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:25.545 08:42:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:25.545 08:42:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:25.545 08:42:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:25.545 08:42:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:25.545 08:42:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:25.545 08:42:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:25.545 08:42:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:25.545 08:42:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:25.545 08:42:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:25.804 08:42:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:25.804 08:42:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:25.804 08:42:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:25.804 08:42:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:25.804 08:42:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:25.804 08:42:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:25.804 08:42:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:25.804 08:42:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:25.804 08:42:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:25.804 08:42:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:25.804 08:42:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:25.804 08:42:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:26.065 08:42:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:26.065 08:42:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:26.065 08:42:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:26.065 08:42:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:26.065 08:42:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:26.065 08:42:02 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:26.324 08:42:02 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:27.704 [2024-10-05 08:42:04.024071] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:27.963 [2024-10-05 08:42:04.250156] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:05:27.963 [2024-10-05 08:42:04.250191] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:05:28.221 [2024-10-05 08:42:04.465580] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:28.221 [2024-10-05 08:42:04.465758] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:29.600 spdk_app_start Round 2
00:05:29.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:29.600 08:42:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:29.600 08:42:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:05:29.600 08:42:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58352 /var/tmp/spdk-nbd.sock
00:05:29.600 08:42:05 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58352 ']'
00:05:29.600 08:42:05 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:29.600 08:42:05 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:29.600 08:42:05 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:29.600 08:42:05 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:29.600 08:42:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:29.600 08:42:05 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:29.600 08:42:05 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:05:29.600 08:42:05 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:29.860 Malloc0
00:05:29.860 08:42:06 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:30.120 Malloc1
00:05:30.120 08:42:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:30.120 08:42:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:30.120 08:42:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:30.120 08:42:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:30.120 08:42:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:30.120 08:42:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:30.120 08:42:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:30.120 08:42:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:30.120 08:42:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:30.120 08:42:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:30.120 08:42:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:30.120 08:42:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:30.120 08:42:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:30.120 08:42:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:30.120 08:42:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:30.120 08:42:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:30.379 /dev/nbd0
00:05:30.379 08:42:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:30.379 08:42:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:30.379 08:42:06 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:05:30.379 08:42:06 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:30.379 08:42:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:30.379 08:42:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:30.379 08:42:06 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:05:30.379 08:42:06 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:30.379 08:42:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:30.379 08:42:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:30.379 08:42:06 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:30.379 1+0 records in
00:05:30.379 1+0 records out
00:05:30.379 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000168052 s, 24.4 MB/s
00:05:30.379 08:42:06 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:30.379 08:42:06 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:30.379 08:42:06 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:30.379 08:42:06 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:30.379 08:42:06 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:30.379 08:42:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:30.379 08:42:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:30.379 08:42:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:30.379 /dev/nbd1
00:05:30.638 08:42:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:30.638 08:42:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:30.638 08:42:06 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:05:30.638 08:42:06 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:30.638 08:42:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:30.638 08:42:06 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:30.638 08:42:06 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:05:30.638 08:42:06 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:30.638 08:42:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:30.638 08:42:06 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:30.638 08:42:06 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:30.638 1+0 records in
00:05:30.638 1+0 records out
00:05:30.638 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224099 s, 18.3 MB/s
00:05:30.638 08:42:06 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:30.638 08:42:06 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:30.638 08:42:06 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:30.638 08:42:06 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:30.638 08:42:06 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:30.638 08:42:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:30.638 08:42:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:30.638 08:42:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:30.638 08:42:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:30.638 08:42:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:30.638 08:42:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:30.638 {
00:05:30.638 "nbd_device": "/dev/nbd0",
00:05:30.638 "bdev_name": "Malloc0"
00:05:30.638 },
00:05:30.638 {
00:05:30.638 "nbd_device": "/dev/nbd1",
00:05:30.638 "bdev_name": "Malloc1"
00:05:30.638 }
00:05:30.638 ]'
00:05:30.638 08:42:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:30.638 {
00:05:30.638 "nbd_device": "/dev/nbd0", 00:05:30.638 "bdev_name": "Malloc0" 00:05:30.638 }, 00:05:30.638 { 00:05:30.638 "nbd_device": "/dev/nbd1", 00:05:30.638 "bdev_name": "Malloc1" 00:05:30.638 } 00:05:30.638 ]' 00:05:30.898 08:42:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:30.898 08:42:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:30.898 /dev/nbd1' 00:05:30.898 08:42:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.898 08:42:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:30.898 /dev/nbd1' 00:05:30.898 08:42:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:30.898 08:42:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:30.898 08:42:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:30.898 08:42:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:30.898 08:42:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:30.898 08:42:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.898 08:42:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.898 08:42:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:30.898 08:42:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:30.898 08:42:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:30.898 08:42:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:30.898 256+0 records in 00:05:30.898 256+0 records out 00:05:30.898 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139062 s, 75.4 MB/s 00:05:30.898 08:42:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.898 08:42:07 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:30.898 256+0 records in 00:05:30.898 256+0 records out 00:05:30.898 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235367 s, 44.6 MB/s 00:05:30.898 08:42:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:30.898 08:42:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:30.898 256+0 records in 00:05:30.898 256+0 records out 00:05:30.898 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0232466 s, 45.1 MB/s 00:05:30.898 08:42:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:30.898 08:42:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.898 08:42:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:30.898 08:42:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:30.898 08:42:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:30.898 08:42:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:30.898 08:42:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:30.898 08:42:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.898 08:42:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:30.898 08:42:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:30.898 08:42:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:30.898 08:42:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
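The pass traced above (bdev/nbd_common.sh's nbd_dd_data_verify) first seeds a 1 MiB random pattern file, dd's it onto each NBD device, then cmp's every device back against the same file. A minimal sketch of that flow, using plain files as stand-in targets and dropping oflag=direct; the function name and sizes mirror the trace, but the body is a simplified reconstruction, not the SPDK source:

```shell
# Simplified reconstruction of the nbd_dd_data_verify pattern from the trace.
# Targets may be any writable files; the real test uses /dev/nbd* devices.
nbd_dd_data_verify() {
    local operation=$1; shift
    local tmp_file=${TMPDIR:-/tmp}/nbdrandtest
    local t
    if [ "$operation" = write ]; then
        # Seed 256 x 4 KiB = 1 MiB of random data, then copy it to each target.
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none
        for t in "$@"; do
            dd if="$tmp_file" of="$t" bs=4096 count=256 conv=notrunc status=none
        done
    elif [ "$operation" = verify ]; then
        # Byte-compare the first 1 MiB of every target against the pattern.
        for t in "$@"; do
            cmp -n 1048576 "$tmp_file" "$t" || return 1
        done
        rm -f "$tmp_file"
    fi
}
```

A `write` call over the device list followed by a `verify` call over the same list reproduces the trace's sequence; any mismatch makes cmp return non-zero, which is what the test relies on under `set -e`.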
00:05:30.898 08:42:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:30.898 08:42:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.898 08:42:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.898 08:42:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:30.898 08:42:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:30.898 08:42:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.898 08:42:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:31.161 08:42:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:31.161 08:42:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:31.161 08:42:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:31.161 08:42:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.161 08:42:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.161 08:42:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:31.161 08:42:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.161 08:42:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.161 08:42:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.161 08:42:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:31.419 08:42:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:31.419 08:42:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:31.419 08:42:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:31.419 08:42:07 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.419 08:42:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.419 08:42:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:31.419 08:42:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.419 08:42:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.419 08:42:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.419 08:42:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.419 08:42:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.419 08:42:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:31.419 08:42:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.419 08:42:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:31.419 08:42:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:31.419 08:42:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.419 08:42:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:31.419 08:42:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:31.419 08:42:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:31.419 08:42:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:31.678 08:42:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:31.678 08:42:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:31.678 08:42:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:31.678 08:42:07 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:31.938 08:42:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:33.319 
[2024-10-05 08:42:09.656853] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:33.579 [2024-10-05 08:42:09.878337] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.579 [2024-10-05 08:42:09.878340] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.839 [2024-10-05 08:42:10.095037] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:33.839 [2024-10-05 08:42:10.095109] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:35.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:35.219 08:42:11 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58352 /var/tmp/spdk-nbd.sock 00:05:35.219 08:42:11 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58352 ']' 00:05:35.219 08:42:11 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:35.219 08:42:11 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:35.219 08:42:11 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
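Earlier in the trace, nbd_get_count derives the number of attached devices by piping the nbd_get_disks JSON through `jq -r '.[] | .nbd_device'` and counting `/dev/nbd` names with `grep -c`. A small sketch of that counting step (assumes jq is installed; the JSON shape matches what the trace prints, but the wrapper function is hypothetical):

```shell
# Count attached NBD devices from nbd_get_disks-style JSON, as in the trace.
nbd_count_from_json() {
    local json=$1
    local names count
    names=$(echo "$json" | jq -r '.[] | .nbd_device')
    # grep -c prints 0 but exits non-zero when nothing matches, hence || true
    # (the trace shows the same guard as a bare "true" after the failed grep).
    count=$(echo "$names" | grep -c /dev/nbd || true)
    echo "$count"
}
```

With all disks stopped the RPC returns `[]`, jq emits nothing, and the count falls through to 0, matching the `'[' 0 -ne 0 ']'` check later in the trace.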
00:05:35.219 08:42:11 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:35.219 08:42:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:35.219 08:42:11 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:35.219 08:42:11 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:35.219 08:42:11 event.app_repeat -- event/event.sh@39 -- # killprocess 58352 00:05:35.219 08:42:11 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 58352 ']' 00:05:35.219 08:42:11 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 58352 00:05:35.219 08:42:11 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:35.219 08:42:11 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:35.219 08:42:11 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58352 00:05:35.219 08:42:11 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:35.219 08:42:11 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:35.219 08:42:11 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58352' 00:05:35.219 killing process with pid 58352 00:05:35.219 08:42:11 event.app_repeat -- common/autotest_common.sh@969 -- # kill 58352 00:05:35.219 08:42:11 event.app_repeat -- common/autotest_common.sh@974 -- # wait 58352 00:05:36.601 spdk_app_start is called in Round 0. 00:05:36.601 Shutdown signal received, stop current app iteration 00:05:36.601 Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 reinitialization... 00:05:36.601 spdk_app_start is called in Round 1. 00:05:36.601 Shutdown signal received, stop current app iteration 00:05:36.601 Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 reinitialization... 00:05:36.601 spdk_app_start is called in Round 2. 
00:05:36.601 Shutdown signal received, stop current app iteration 00:05:36.601 Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 reinitialization... 00:05:36.601 spdk_app_start is called in Round 3. 00:05:36.601 Shutdown signal received, stop current app iteration 00:05:36.601 08:42:12 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:36.601 08:42:12 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:36.601 00:05:36.601 real 0m19.246s 00:05:36.601 user 0m39.493s 00:05:36.601 sys 0m2.959s 00:05:36.601 08:42:12 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.601 08:42:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:36.601 ************************************ 00:05:36.601 END TEST app_repeat 00:05:36.601 ************************************ 00:05:36.601 08:42:12 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:36.601 08:42:12 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:36.601 08:42:12 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:36.601 08:42:12 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.601 08:42:12 event -- common/autotest_common.sh@10 -- # set +x 00:05:36.601 ************************************ 00:05:36.601 START TEST cpu_locks 00:05:36.601 ************************************ 00:05:36.601 08:42:12 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:36.601 * Looking for test storage... 
00:05:36.601 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:36.601 08:42:12 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:36.601 08:42:12 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:05:36.601 08:42:12 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:36.601 08:42:13 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:36.601 08:42:13 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.601 08:42:13 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.601 08:42:13 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.601 08:42:13 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.601 08:42:13 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.601 08:42:13 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.601 08:42:13 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.601 08:42:13 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.601 08:42:13 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.601 08:42:13 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.601 08:42:13 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.601 08:42:13 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:36.601 08:42:13 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:36.601 08:42:13 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.601 08:42:13 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:36.861 08:42:13 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:36.861 08:42:13 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:36.861 08:42:13 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.861 08:42:13 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:36.861 08:42:13 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.861 08:42:13 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:36.861 08:42:13 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:36.861 08:42:13 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.861 08:42:13 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:36.861 08:42:13 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.861 08:42:13 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.861 08:42:13 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.861 08:42:13 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:36.861 08:42:13 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.861 08:42:13 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:36.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.861 --rc genhtml_branch_coverage=1 00:05:36.861 --rc genhtml_function_coverage=1 00:05:36.861 --rc genhtml_legend=1 00:05:36.861 --rc geninfo_all_blocks=1 00:05:36.861 --rc geninfo_unexecuted_blocks=1 00:05:36.861 00:05:36.861 ' 00:05:36.861 08:42:13 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:36.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.861 --rc genhtml_branch_coverage=1 00:05:36.861 --rc genhtml_function_coverage=1 00:05:36.861 --rc genhtml_legend=1 00:05:36.861 --rc geninfo_all_blocks=1 00:05:36.861 --rc geninfo_unexecuted_blocks=1
00:05:36.861 00:05:36.861 ' 00:05:36.861 08:42:13 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:36.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.861 --rc genhtml_branch_coverage=1 00:05:36.861 --rc genhtml_function_coverage=1 00:05:36.861 --rc genhtml_legend=1 00:05:36.861 --rc geninfo_all_blocks=1 00:05:36.861 --rc geninfo_unexecuted_blocks=1 00:05:36.861 00:05:36.861 ' 00:05:36.861 08:42:13 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:36.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.861 --rc genhtml_branch_coverage=1 00:05:36.861 --rc genhtml_function_coverage=1 00:05:36.861 --rc genhtml_legend=1 00:05:36.861 --rc geninfo_all_blocks=1 00:05:36.861 --rc geninfo_unexecuted_blocks=1 00:05:36.861 00:05:36.861 ' 00:05:36.861 08:42:13 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:36.861 08:42:13 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:36.861 08:42:13 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:36.861 08:42:13 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:36.861 08:42:13 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:36.861 08:42:13 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.861 08:42:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.861 ************************************ 00:05:36.861 START TEST default_locks 00:05:36.861 ************************************ 00:05:36.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
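The cmp_versions trace above splits both version strings on the characters `.`, `-`, and `:` (`IFS=.-:` plus `read -ra`) and compares them field by field, with missing fields treated as zero. A minimal numeric-only bash sketch of that comparison; this is not the exact scripts/common.sh code (which also normalizes fields through its `decimal` helper and handles `>`/`=` operators):

```shell
# Returns 0 (true) when $1 sorts strictly before $2, comparing dot/dash/colon
# separated numeric fields; a missing field counts as 0. Sketch of the
# cmp_versions "<" path seen in the trace; non-numeric fields are unsupported.
version_lt() {
    local IFS=.-:
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i len=${#v1[@]}
    (( ${#v2[@]} > len )) && len=${#v2[@]}
    for (( i = 0; i < len; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal versions are not less-than
}
```

This is why the trace's `lt 1.15 2` succeeds: the first fields already decide the comparison (1 < 2), so the remaining fields are never consulted.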
00:05:36.861 08:42:13 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:36.861 08:42:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58794 00:05:36.861 08:42:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58794 00:05:36.861 08:42:13 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58794 ']' 00:05:36.861 08:42:13 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.861 08:42:13 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:36.861 08:42:13 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.861 08:42:13 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:36.861 08:42:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.861 08:42:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:36.862 [2024-10-05 08:42:13.193524] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:05:36.862 [2024-10-05 08:42:13.193656] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58794 ] 00:05:37.121 [2024-10-05 08:42:13.357397] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.380 [2024-10-05 08:42:13.612517] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.316 08:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:38.316 08:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:38.316 08:42:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58794 00:05:38.316 08:42:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:38.316 08:42:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58794 00:05:38.574 08:42:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58794 00:05:38.574 08:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 58794 ']' 00:05:38.574 08:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 58794 00:05:38.574 08:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:38.574 08:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:38.574 08:42:14 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58794 00:05:38.574 killing process with pid 58794 00:05:38.574 08:42:15 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:38.574 08:42:15 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:38.574 08:42:15 event.cpu_locks.default_locks -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 58794' 00:05:38.574 08:42:15 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 58794 00:05:38.574 08:42:15 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 58794 00:05:41.932 08:42:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58794 00:05:41.932 08:42:17 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:41.932 08:42:17 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58794 00:05:41.933 08:42:17 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:41.933 08:42:17 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:41.933 08:42:17 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:41.933 08:42:17 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:41.933 08:42:17 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58794 00:05:41.933 08:42:17 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58794 ']' 00:05:41.933 08:42:17 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.933 08:42:17 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:41.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.933 08:42:17 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
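killprocess, traced repeatedly above, probes the pid with `kill -0`, looks up the process name via ps, then kills the process and waits for it to exit. A simplified stand-in for that helper (the traced version additionally special-cases processes running under sudo and uses `ps --no-headers -o comm=`; this sketch keeps only the core sequence):

```shell
# Simplified killprocess in the spirit of the traced helper: verify the pid
# is alive, report it, terminate it, and reap it so no zombie is left behind.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1    # not running
    local name
    name=$(ps -o comm= -p "$pid")             # process name, for the log line
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true           # reap if it is our own child
}
```

The `wait` matters in the trace: the test must not proceed to the next round until the SPDK target has fully exited and released its sockets and locks.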
00:05:41.933 08:42:17 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:41.933 08:42:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.933 ERROR: process (pid: 58794) is no longer running 00:05:41.933 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (58794) - No such process 00:05:41.933 08:42:17 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:41.933 08:42:17 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:41.933 08:42:17 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:41.933 08:42:17 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:41.933 08:42:17 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:41.933 08:42:17 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:41.933 08:42:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:41.933 08:42:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:41.933 08:42:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:41.933 08:42:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:41.933 00:05:41.933 real 0m4.548s 00:05:41.933 user 0m4.320s 00:05:41.933 sys 0m0.784s 00:05:41.933 ************************************ 00:05:41.933 END TEST default_locks 00:05:41.933 ************************************ 00:05:41.933 08:42:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.933 08:42:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.933 08:42:17 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:41.933 08:42:17 event.cpu_locks -- common/autotest_common.sh@1101 -- # 
'[' 2 -le 1 ']' 00:05:41.933 08:42:17 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.933 08:42:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.933 ************************************ 00:05:41.933 START TEST default_locks_via_rpc 00:05:41.933 ************************************ 00:05:41.933 08:42:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:41.933 08:42:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58874 00:05:41.933 08:42:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:41.933 08:42:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58874 00:05:41.933 08:42:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 58874 ']' 00:05:41.933 08:42:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.933 08:42:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:41.933 08:42:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.933 08:42:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:41.933 08:42:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.933 [2024-10-05 08:42:17.815626] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
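The START TEST / END TEST banners throughout this section come from the run_test harness visible in the trace (`run_test default_locks default_locks`, etc.). A hypothetical reduction of that wrapper; the real autotest_common.sh version also validates its argument count (the `'[' 2 -le 1 ']'` checks above) and manages xtrace state, which this sketch omits:

```shell
# Hypothetical mini run_test: bracket a named test with the banner lines seen
# in the log and propagate the test command's exit status.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    "$@"                       # run the test command with its arguments
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
```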
00:05:41.933 [2024-10-05 08:42:17.815741] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58874 ] 00:05:41.933 [2024-10-05 08:42:17.981238] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.933 [2024-10-05 08:42:18.219162] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.871 08:42:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:42.871 08:42:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:42.871 08:42:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:42.871 08:42:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.871 08:42:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.871 08:42:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.871 08:42:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:42.871 08:42:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:42.871 08:42:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:42.871 08:42:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:42.871 08:42:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:42.871 08:42:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.871 08:42:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.871 08:42:19 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.871 08:42:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58874 00:05:42.871 08:42:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58874 00:05:42.871 08:42:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:43.439 08:42:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58874 00:05:43.439 08:42:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 58874 ']' 00:05:43.439 08:42:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 58874 00:05:43.439 08:42:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:43.439 08:42:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:43.439 08:42:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58874 00:05:43.439 killing process with pid 58874 00:05:43.439 08:42:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:43.439 08:42:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:43.439 08:42:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58874' 00:05:43.439 08:42:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 58874 00:05:43.439 08:42:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 58874 00:05:45.980 00:05:45.980 real 0m4.670s 00:05:45.980 user 0m4.437s 00:05:45.980 sys 0m0.849s 00:05:45.980 08:42:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.980 08:42:22 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.980 ************************************ 00:05:45.980 END TEST default_locks_via_rpc 00:05:45.980 ************************************ 00:05:45.980 08:42:22 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:45.980 08:42:22 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.980 08:42:22 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.980 08:42:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.240 ************************************ 00:05:46.240 START TEST non_locking_app_on_locked_coremask 00:05:46.240 ************************************ 00:05:46.240 08:42:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:46.240 08:42:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58954 00:05:46.240 08:42:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.240 08:42:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58954 /var/tmp/spdk.sock 00:05:46.240 08:42:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58954 ']' 00:05:46.240 08:42:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.240 08:42:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:46.240 08:42:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:46.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.240 08:42:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:46.240 08:42:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.240 [2024-10-05 08:42:22.559224] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:05:46.240 [2024-10-05 08:42:22.559352] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58954 ] 00:05:46.500 [2024-10-05 08:42:22.727148] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.500 [2024-10-05 08:42:22.964272] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:47.440 08:42:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:47.440 08:42:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:47.440 08:42:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58975 00:05:47.440 08:42:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58975 /var/tmp/spdk2.sock 00:05:47.440 08:42:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58975 ']' 00:05:47.440 08:42:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.440 08:42:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:47.440 08:42:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:47.440 08:42:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:47.440 08:42:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.440 08:42:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:47.699 [2024-10-05 08:42:24.006521] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:05:47.699 [2024-10-05 08:42:24.006638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58975 ] 00:05:47.699 [2024-10-05 08:42:24.157722] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:47.699 [2024-10-05 08:42:24.157778] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.270 [2024-10-05 08:42:24.656685] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.179 08:42:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.179 08:42:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:50.179 08:42:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58954 00:05:50.179 08:42:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58954 00:05:50.179 08:42:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:51.119 08:42:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58954 00:05:51.119 08:42:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58954 ']' 00:05:51.119 08:42:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58954 00:05:51.119 08:42:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:51.119 08:42:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:51.119 08:42:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
58954 00:05:51.119 killing process with pid 58954 00:05:51.119 08:42:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:51.119 08:42:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:51.119 08:42:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58954' 00:05:51.119 08:42:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58954 00:05:51.119 08:42:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58954 00:05:56.419 08:42:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58975 00:05:56.419 08:42:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58975 ']' 00:05:56.419 08:42:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58975 00:05:56.419 08:42:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:56.419 08:42:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:56.419 08:42:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58975 00:05:56.419 killing process with pid 58975 00:05:56.419 08:42:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:56.419 08:42:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:56.419 08:42:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58975' 00:05:56.419 08:42:32 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58975 00:05:56.419 08:42:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58975 00:05:58.962 00:05:58.962 real 0m12.858s 00:05:58.962 user 0m12.718s 00:05:58.962 sys 0m1.733s 00:05:58.962 08:42:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.962 ************************************ 00:05:58.962 END TEST non_locking_app_on_locked_coremask 00:05:58.962 ************************************ 00:05:58.962 08:42:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.962 08:42:35 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:58.962 08:42:35 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.962 08:42:35 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.962 08:42:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.962 ************************************ 00:05:58.962 START TEST locking_app_on_unlocked_coremask 00:05:58.962 ************************************ 00:05:58.962 08:42:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:58.962 08:42:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59138 00:05:58.962 08:42:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:58.962 08:42:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59138 /var/tmp/spdk.sock 00:05:58.963 08:42:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59138 ']' 
00:05:58.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.963 08:42:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.963 08:42:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:58.963 08:42:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.963 08:42:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:58.963 08:42:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.270 [2024-10-05 08:42:35.473568] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:05:59.270 [2024-10-05 08:42:35.473684] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59138 ] 00:05:59.270 [2024-10-05 08:42:35.638092] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:59.270 [2024-10-05 08:42:35.638184] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.528 [2024-10-05 08:42:35.883783] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.463 08:42:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:00.464 08:42:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:00.464 08:42:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:00.464 08:42:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59159 00:06:00.464 08:42:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59159 /var/tmp/spdk2.sock 00:06:00.464 08:42:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59159 ']' 00:06:00.464 08:42:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.464 08:42:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.464 08:42:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:00.464 08:42:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.464 08:42:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.722 [2024-10-05 08:42:36.944676] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:06:00.722 [2024-10-05 08:42:36.944897] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59159 ] 00:06:00.722 [2024-10-05 08:42:37.097399] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.291 [2024-10-05 08:42:37.575926] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.197 08:42:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:03.197 08:42:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:03.197 08:42:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59159 00:06:03.197 08:42:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59159 00:06:03.197 08:42:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:03.767 08:42:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59138 00:06:03.767 08:42:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59138 ']' 00:06:03.767 08:42:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59138 00:06:03.767 08:42:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:03.767 08:42:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:03.767 08:42:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59138 00:06:03.767 killing process with pid 59138 00:06:03.767 08:42:39 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:03.767 08:42:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:03.767 08:42:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59138' 00:06:03.767 08:42:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59138 00:06:03.767 08:42:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59138 00:06:09.081 08:42:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59159 00:06:09.081 08:42:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59159 ']' 00:06:09.081 08:42:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59159 00:06:09.081 08:42:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:09.081 08:42:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:09.081 08:42:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59159 00:06:09.081 killing process with pid 59159 00:06:09.081 08:42:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:09.081 08:42:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:09.081 08:42:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59159' 00:06:09.081 08:42:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59159 00:06:09.081 08:42:45 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@974 -- # wait 59159 00:06:11.623 00:06:11.624 real 0m12.552s 00:06:11.624 user 0m12.404s 00:06:11.624 sys 0m1.553s 00:06:11.624 08:42:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.624 ************************************ 00:06:11.624 END TEST locking_app_on_unlocked_coremask 00:06:11.624 ************************************ 00:06:11.624 08:42:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.624 08:42:47 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:11.624 08:42:47 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:11.624 08:42:47 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.624 08:42:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.624 ************************************ 00:06:11.624 START TEST locking_app_on_locked_coremask 00:06:11.624 ************************************ 00:06:11.624 08:42:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:11.624 08:42:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59315 00:06:11.624 08:42:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:11.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:11.624 08:42:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59315 /var/tmp/spdk.sock 00:06:11.624 08:42:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59315 ']' 00:06:11.624 08:42:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.624 08:42:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.624 08:42:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.624 08:42:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.624 08:42:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.624 [2024-10-05 08:42:48.091837] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:06:11.624 [2024-10-05 08:42:48.091987] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59315 ] 00:06:11.884 [2024-10-05 08:42:48.238351] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.143 [2024-10-05 08:42:48.480367] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.083 08:42:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:13.083 08:42:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:13.083 08:42:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59336 00:06:13.083 08:42:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:13.083 08:42:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59336 /var/tmp/spdk2.sock 00:06:13.083 08:42:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:13.083 08:42:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59336 /var/tmp/spdk2.sock 00:06:13.083 08:42:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:13.083 08:42:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:13.083 08:42:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:13.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:13.083 08:42:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:13.083 08:42:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59336 /var/tmp/spdk2.sock 00:06:13.083 08:42:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59336 ']' 00:06:13.083 08:42:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:13.083 08:42:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:13.083 08:42:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:13.083 08:42:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:13.083 08:42:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.343 [2024-10-05 08:42:49.571410] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:06:13.343 [2024-10-05 08:42:49.571523] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59336 ] 00:06:13.343 [2024-10-05 08:42:49.724281] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59315 has claimed it. 00:06:13.343 [2024-10-05 08:42:49.724345] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:13.912 ERROR: process (pid: 59336) is no longer running 00:06:13.912 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59336) - No such process 00:06:13.912 08:42:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:13.912 08:42:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:13.912 08:42:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:13.912 08:42:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:13.912 08:42:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:13.912 08:42:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:13.912 08:42:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59315 00:06:13.912 08:42:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59315 00:06:13.912 08:42:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:14.173 08:42:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59315 00:06:14.173 08:42:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59315 ']' 00:06:14.173 08:42:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59315 00:06:14.173 08:42:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:14.173 08:42:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:14.173 08:42:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59315 00:06:14.173 
killing process with pid 59315 00:06:14.173 08:42:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:14.173 08:42:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:14.173 08:42:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59315' 00:06:14.173 08:42:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59315 00:06:14.173 08:42:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59315 00:06:16.714 00:06:16.714 real 0m5.171s 00:06:16.714 user 0m5.078s 00:06:16.714 sys 0m0.883s 00:06:16.714 08:42:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.714 08:42:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.714 ************************************ 00:06:16.714 END TEST locking_app_on_locked_coremask 00:06:16.714 ************************************ 00:06:16.974 08:42:53 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:16.974 08:42:53 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:16.974 08:42:53 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.974 08:42:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.974 ************************************ 00:06:16.974 START TEST locking_overlapped_coremask 00:06:16.974 ************************************ 00:06:16.974 08:42:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:16.974 08:42:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59406 00:06:16.974 08:42:53 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:16.974 08:42:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59406 /var/tmp/spdk.sock 00:06:16.974 08:42:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59406 ']' 00:06:16.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.974 08:42:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.974 08:42:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:16.974 08:42:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.974 08:42:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:16.974 08:42:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.974 [2024-10-05 08:42:53.329175] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:06:16.974 [2024-10-05 08:42:53.329292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59406 ] 00:06:17.235 [2024-10-05 08:42:53.492561] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:17.494 [2024-10-05 08:42:53.745931] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.494 [2024-10-05 08:42:53.746073] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.494 [2024-10-05 08:42:53.746032] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.446 08:42:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:18.446 08:42:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:18.446 08:42:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59429 00:06:18.446 08:42:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59429 /var/tmp/spdk2.sock 00:06:18.446 08:42:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:18.446 08:42:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59429 /var/tmp/spdk2.sock 00:06:18.446 08:42:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:18.446 08:42:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:18.446 08:42:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.446 08:42:54 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:18.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:18.446 08:42:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:18.446 08:42:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59429 /var/tmp/spdk2.sock 00:06:18.446 08:42:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59429 ']' 00:06:18.446 08:42:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.446 08:42:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:18.446 08:42:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:18.446 08:42:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:18.446 08:42:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.446 [2024-10-05 08:42:54.832795] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:06:18.446 [2024-10-05 08:42:54.832908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59429 ] 00:06:18.722 [2024-10-05 08:42:54.990434] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59406 has claimed it. 00:06:18.722 [2024-10-05 08:42:54.990667] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
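The failure above is pure core-mask overlap: the first target was started with `-m 0x7` (cores 0-2) while the second asked for `-m 0x1c` (cores 2-4), so core 2 is contested and the second target exits. A minimal sketch of that mask arithmetic, with a made-up helper name for illustration only:

```shell
#!/usr/bin/env bash
# Sketch: why -m 0x7 and -m 0x1c collide. Expand each hex coremask into the
# core numbers it selects, then intersect. mask_to_cores is a demo helper,
# not part of SPDK.
mask_to_cores() {
    local mask=$(( $1 ))          # arithmetic context parses 0x.. hex
    local core
    local -a cores=()
    for (( core = 0; mask >> core; core++ )); do
        (( (mask >> core) & 1 )) && cores+=("$core")
    done
    echo "${cores[*]}"
}

a=$(mask_to_cores 0x7)            # cores of the first target
b=$(mask_to_cores 0x1c)           # cores of the second target
echo "0x7  -> $a"                 # 0 1 2
echo "0x1c -> $b"                 # 2 3 4
# Shared cores (comm needs sorted input; both lists already are):
comm -12 <(tr ' ' '\n' <<<"$a") <(tr ' ' '\n' <<<"$b")   # 2
```

The single shared core printed at the end is exactly the core named in the "Cannot create lock on core 2" error.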
00:06:18.983 ERROR: process (pid: 59429) is no longer running 00:06:18.983 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59429) - No such process 00:06:18.983 08:42:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:18.983 08:42:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:18.983 08:42:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:18.983 08:42:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:18.983 08:42:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:18.983 08:42:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:18.983 08:42:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:18.983 08:42:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:18.983 08:42:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:18.983 08:42:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:18.983 08:42:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59406 00:06:18.983 08:42:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 59406 ']' 00:06:18.983 08:42:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 59406 00:06:18.983 08:42:55 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:18.983 08:42:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:19.243 08:42:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59406 00:06:19.243 08:42:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:19.243 08:42:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:19.243 08:42:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59406' 00:06:19.243 killing process with pid 59406 00:06:19.243 08:42:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 59406 00:06:19.243 08:42:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 59406 00:06:22.536 00:06:22.536 real 0m5.026s 00:06:22.536 user 0m12.972s 00:06:22.536 sys 0m0.774s 00:06:22.536 08:42:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.536 08:42:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.536 ************************************ 00:06:22.536 END TEST locking_overlapped_coremask 00:06:22.536 ************************************ 00:06:22.537 08:42:58 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:22.537 08:42:58 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.537 08:42:58 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.537 08:42:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.537 ************************************ 00:06:22.537 START TEST 
locking_overlapped_coremask_via_rpc 00:06:22.537 ************************************ 00:06:22.537 08:42:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:22.537 08:42:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59499 00:06:22.537 08:42:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:22.537 08:42:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59499 /var/tmp/spdk.sock 00:06:22.537 08:42:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59499 ']' 00:06:22.537 08:42:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.537 08:42:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:22.537 08:42:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.537 08:42:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:22.537 08:42:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.537 [2024-10-05 08:42:58.435903] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:06:22.537 [2024-10-05 08:42:58.436170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59499 ] 00:06:22.537 [2024-10-05 08:42:58.598744] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:22.537 [2024-10-05 08:42:58.598921] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:22.537 [2024-10-05 08:42:58.864032] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.537 [2024-10-05 08:42:58.864156] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.537 [2024-10-05 08:42:58.864204] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.477 08:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:23.477 08:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:23.477 08:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59517 00:06:23.477 08:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:23.477 08:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59517 /var/tmp/spdk2.sock 00:06:23.477 08:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59517 ']' 00:06:23.477 08:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:23.477 08:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:23.477 08:42:59 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:23.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:23.477 08:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:23.477 08:42:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.737 [2024-10-05 08:42:59.991820] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:06:23.737 [2024-10-05 08:42:59.992040] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59517 ] 00:06:23.737 [2024-10-05 08:43:00.150691] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
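The `check_remaining_locks` helper that appears earlier in this log verifies that exactly the expected per-core lock files exist by comparing a filesystem glob against a brace expansion. A self-contained sketch of that comparison, using a temp directory as a stand-in for `/var/tmp`:

```shell
#!/usr/bin/env bash
# Sketch of the check_remaining_locks comparison seen in the log: the glob
# of lock files actually on disk must equal the set a 3-core mask (0x7)
# should leave behind. A temp dir stands in for /var/tmp.
set -u
dir=$(mktemp -d)
touch "$dir"/spdk_cpu_lock_{000..002}             # simulate cores 0-2 locked

locks=("$dir"/spdk_cpu_lock_*)                    # what exists (sorted glob)
locks_expected=("$dir"/spdk_cpu_lock_{000..002})  # what the mask implies

if [[ "${locks[*]}" == "${locks_expected[*]}" ]]; then
    echo "lock files match the expected set"
fi
```

The comparison works because pathname expansion returns results in sorted order, which matches the ascending `{000..002}` brace expansion; a leftover or missing lock file breaks the string equality.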
00:06:23.737 [2024-10-05 08:43:00.150881] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:24.306 [2024-10-05 08:43:00.695019] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:06:24.306 [2024-10-05 08:43:00.695239] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:06:24.306 [2024-10-05 08:43:00.695355] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.843 08:43:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:26.843 08:43:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:26.843 08:43:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:26.843 08:43:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.843 08:43:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.843 08:43:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.843 08:43:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:26.843 08:43:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:26.843 08:43:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:26.843 08:43:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:26.843 08:43:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.843 08:43:02 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:26.843 08:43:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.843 08:43:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:26.843 08:43:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.843 08:43:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.843 [2024-10-05 08:43:02.746134] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59499 has claimed it. 00:06:26.843 request: 00:06:26.843 { 00:06:26.843 "method": "framework_enable_cpumask_locks", 00:06:26.843 "req_id": 1 00:06:26.843 } 00:06:26.843 Got JSON-RPC error response 00:06:26.843 response: 00:06:26.843 { 00:06:26.843 "code": -32603, 00:06:26.843 "message": "Failed to claim CPU core: 2" 00:06:26.843 } 00:06:26.843 08:43:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:26.843 08:43:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:26.843 08:43:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:26.843 08:43:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:26.843 08:43:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:26.843 08:43:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59499 /var/tmp/spdk.sock 00:06:26.843 08:43:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # 
'[' -z 59499 ']' 00:06:26.843 08:43:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.843 08:43:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:26.843 08:43:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.843 08:43:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:26.843 08:43:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.843 08:43:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:26.843 08:43:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:26.843 08:43:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59517 /var/tmp/spdk2.sock 00:06:26.843 08:43:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59517 ']' 00:06:26.843 08:43:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:26.843 08:43:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:26.843 08:43:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:26.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
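The "Cannot create lock on core 2" errors in this log come from per-core advisory locks: each claimed core corresponds to one exclusively `flock()`ed file, and a second open file description on the same file cannot take the exclusive lock. A hedged imitation of that scheme, assuming util-linux `flock` is available; the directory, file names, and `claim` function are stand-ins, not the real SPDK code:

```shell
#!/usr/bin/env bash
# Imitation of the per-core lock claim: one exclusive flock per core file.
# Uses a temp dir instead of /var/tmp/spdk_cpu_lock_NNN; each claim opens a
# fresh descriptor that stays open for the life of the script.
set -u
lockdir=$(mktemp -d)

claim() {                      # claim <core>: returns 0 if lock acquired
    local core=$1 fd
    exec {fd}>"$lockdir/cpu_lock_$(printf '%03d' "$core")"
    flock -n "$fd"             # non-blocking exclusive lock on that fd
}

claim 0 && claim 1 && claim 2 && echo "claimed mask 0x7"
# A second claim of core 2 (what a 0x1c target would attempt) is denied,
# because another open descriptor already holds the exclusive flock.
claim 2 || echo "Cannot create lock on core 2"
```

Because `flock` locks belong to the open file description rather than the process, even this single-process demo reproduces the denial; in the log the two descriptions belong to two `spdk_tgt` processes, pids 59406 and 59429.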
00:06:26.843 08:43:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:26.843 08:43:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.843 08:43:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:26.843 08:43:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:26.843 08:43:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:26.843 08:43:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:26.843 08:43:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:26.843 08:43:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:26.843 00:06:26.843 real 0m4.845s 00:06:26.843 user 0m1.270s 00:06:26.843 sys 0m0.195s 00:06:26.843 08:43:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.843 08:43:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.843 ************************************ 00:06:26.843 END TEST locking_overlapped_coremask_via_rpc 00:06:26.843 ************************************ 00:06:26.843 08:43:03 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:26.843 08:43:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59499 ]] 00:06:26.843 08:43:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59499 00:06:26.843 08:43:03 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59499 ']' 00:06:26.843 08:43:03 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59499 00:06:26.843 08:43:03 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:26.843 08:43:03 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:26.843 08:43:03 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59499 00:06:26.843 killing process with pid 59499 00:06:26.843 08:43:03 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:26.843 08:43:03 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:26.843 08:43:03 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59499' 00:06:26.843 08:43:03 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59499 00:06:26.843 08:43:03 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59499 00:06:30.143 08:43:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59517 ]] 00:06:30.143 08:43:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59517 00:06:30.143 08:43:06 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59517 ']' 00:06:30.143 08:43:06 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59517 00:06:30.143 08:43:06 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:30.143 08:43:06 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:30.143 08:43:06 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59517 00:06:30.143 killing process with pid 59517 00:06:30.143 08:43:06 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:30.143 08:43:06 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:30.143 08:43:06 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 59517' 00:06:30.143 08:43:06 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59517 00:06:30.143 08:43:06 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59517 00:06:32.683 08:43:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:32.683 08:43:08 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:32.683 08:43:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59499 ]] 00:06:32.683 08:43:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59499 00:06:32.683 08:43:08 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59499 ']' 00:06:32.683 Process with pid 59499 is not found 00:06:32.683 08:43:08 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59499 00:06:32.683 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59499) - No such process 00:06:32.683 08:43:08 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59499 is not found' 00:06:32.683 08:43:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59517 ]] 00:06:32.683 08:43:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59517 00:06:32.683 08:43:08 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59517 ']' 00:06:32.683 08:43:08 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59517 00:06:32.683 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59517) - No such process 00:06:32.683 08:43:08 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59517 is not found' 00:06:32.683 Process with pid 59517 is not found 00:06:32.683 08:43:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:32.683 00:06:32.683 real 0m55.925s 00:06:32.683 user 1m31.800s 00:06:32.683 sys 0m8.389s 00:06:32.683 08:43:08 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.683 ************************************ 00:06:32.683 END TEST cpu_locks 00:06:32.683 
************************************ 00:06:32.683 08:43:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.683 ************************************ 00:06:32.683 END TEST event 00:06:32.683 ************************************ 00:06:32.683 00:06:32.683 real 1m28.908s 00:06:32.683 user 2m35.707s 00:06:32.683 sys 0m12.790s 00:06:32.683 08:43:08 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.683 08:43:08 event -- common/autotest_common.sh@10 -- # set +x 00:06:32.683 08:43:08 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:32.683 08:43:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.683 08:43:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.683 08:43:08 -- common/autotest_common.sh@10 -- # set +x 00:06:32.683 ************************************ 00:06:32.683 START TEST thread 00:06:32.683 ************************************ 00:06:32.683 08:43:08 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:32.683 * Looking for test storage... 
00:06:32.683 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:32.683 08:43:09 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:32.683 08:43:09 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:32.683 08:43:09 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:32.683 08:43:09 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:32.683 08:43:09 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:32.683 08:43:09 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:32.683 08:43:09 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:32.683 08:43:09 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.683 08:43:09 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:32.683 08:43:09 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:32.683 08:43:09 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:32.683 08:43:09 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:32.683 08:43:09 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:32.683 08:43:09 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:32.683 08:43:09 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:32.683 08:43:09 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:32.683 08:43:09 thread -- scripts/common.sh@345 -- # : 1 00:06:32.683 08:43:09 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:32.683 08:43:09 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:32.683 08:43:09 thread -- scripts/common.sh@365 -- # decimal 1 00:06:32.683 08:43:09 thread -- scripts/common.sh@353 -- # local d=1 00:06:32.683 08:43:09 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.683 08:43:09 thread -- scripts/common.sh@355 -- # echo 1 00:06:32.683 08:43:09 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:32.683 08:43:09 thread -- scripts/common.sh@366 -- # decimal 2 00:06:32.683 08:43:09 thread -- scripts/common.sh@353 -- # local d=2 00:06:32.683 08:43:09 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.683 08:43:09 thread -- scripts/common.sh@355 -- # echo 2 00:06:32.683 08:43:09 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:32.683 08:43:09 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:32.683 08:43:09 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:32.683 08:43:09 thread -- scripts/common.sh@368 -- # return 0 00:06:32.683 08:43:09 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.683 08:43:09 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:32.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.683 --rc genhtml_branch_coverage=1 00:06:32.683 --rc genhtml_function_coverage=1 00:06:32.683 --rc genhtml_legend=1 00:06:32.683 --rc geninfo_all_blocks=1 00:06:32.683 --rc geninfo_unexecuted_blocks=1 00:06:32.683 00:06:32.683 ' 00:06:32.683 08:43:09 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:32.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.683 --rc genhtml_branch_coverage=1 00:06:32.683 --rc genhtml_function_coverage=1 00:06:32.683 --rc genhtml_legend=1 00:06:32.683 --rc geninfo_all_blocks=1 00:06:32.683 --rc geninfo_unexecuted_blocks=1 00:06:32.683 00:06:32.683 ' 00:06:32.683 08:43:09 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:32.683 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.684 --rc genhtml_branch_coverage=1 00:06:32.684 --rc genhtml_function_coverage=1 00:06:32.684 --rc genhtml_legend=1 00:06:32.684 --rc geninfo_all_blocks=1 00:06:32.684 --rc geninfo_unexecuted_blocks=1 00:06:32.684 00:06:32.684 ' 00:06:32.684 08:43:09 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:32.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.684 --rc genhtml_branch_coverage=1 00:06:32.684 --rc genhtml_function_coverage=1 00:06:32.684 --rc genhtml_legend=1 00:06:32.684 --rc geninfo_all_blocks=1 00:06:32.684 --rc geninfo_unexecuted_blocks=1 00:06:32.684 00:06:32.684 ' 00:06:32.684 08:43:09 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:32.684 08:43:09 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:32.684 08:43:09 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.684 08:43:09 thread -- common/autotest_common.sh@10 -- # set +x 00:06:32.684 ************************************ 00:06:32.684 START TEST thread_poller_perf 00:06:32.684 ************************************ 00:06:32.684 08:43:09 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:32.944 [2024-10-05 08:43:09.181711] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:06:32.944 [2024-10-05 08:43:09.181891] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59723 ] 00:06:32.944 [2024-10-05 08:43:09.346990] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.204 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:33.204 [2024-10-05 08:43:09.606171] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.585 ====================================== 00:06:34.585 busy:2299485018 (cyc) 00:06:34.585 total_run_count: 426000 00:06:34.585 tsc_hz: 2290000000 (cyc) 00:06:34.585 ====================================== 00:06:34.585 poller_cost: 5397 (cyc), 2356 (nsec) 00:06:34.585 00:06:34.585 real 0m1.880s 00:06:34.585 user 0m1.640s 00:06:34.585 sys 0m0.132s 00:06:34.585 08:43:11 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.585 ************************************ 00:06:34.585 END TEST thread_poller_perf 00:06:34.585 ************************************ 00:06:34.585 08:43:11 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:34.846 08:43:11 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:34.846 08:43:11 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:34.846 08:43:11 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.846 08:43:11 thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.846 ************************************ 00:06:34.846 START TEST thread_poller_perf 00:06:34.846 ************************************ 00:06:34.846 08:43:11 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 
1000 -l 0 -t 1 00:06:34.846 [2024-10-05 08:43:11.147062] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:06:34.846 [2024-10-05 08:43:11.147169] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59765 ] 00:06:35.105 [2024-10-05 08:43:11.316708] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.105 [2024-10-05 08:43:11.555695] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.106 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:36.488 ====================================== 00:06:36.488 busy:2293275312 (cyc) 00:06:36.488 total_run_count: 5615000 00:06:36.488 tsc_hz: 2290000000 (cyc) 00:06:36.488 ====================================== 00:06:36.488 poller_cost: 408 (cyc), 178 (nsec) 00:06:36.488 ************************************ 00:06:36.488 END TEST thread_poller_perf 00:06:36.488 ************************************ 00:06:36.488 00:06:36.488 real 0m1.859s 00:06:36.488 user 0m1.605s 00:06:36.488 sys 0m0.146s 00:06:36.488 08:43:12 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.488 08:43:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:36.748 08:43:13 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:36.748 ************************************ 00:06:36.748 END TEST thread 00:06:36.748 ************************************ 00:06:36.748 00:06:36.748 real 0m4.110s 00:06:36.748 user 0m3.402s 00:06:36.748 sys 0m0.509s 00:06:36.748 08:43:13 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.748 08:43:13 thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.748 08:43:13 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:36.748 08:43:13 -- spdk/autotest.sh@176 -- # run_test 
app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:36.748 08:43:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:36.748 08:43:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.748 08:43:13 -- common/autotest_common.sh@10 -- # set +x 00:06:36.748 ************************************ 00:06:36.748 START TEST app_cmdline 00:06:36.748 ************************************ 00:06:36.748 08:43:13 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:36.748 * Looking for test storage... 00:06:36.748 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:36.748 08:43:13 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:36.748 08:43:13 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:36.748 08:43:13 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:37.008 08:43:13 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:37.008 08:43:13 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.008 08:43:13 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.008 08:43:13 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.008 08:43:13 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.008 08:43:13 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.008 08:43:13 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.008 08:43:13 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.008 08:43:13 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.008 08:43:13 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.008 08:43:13 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.009 08:43:13 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.009 08:43:13 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:37.009 08:43:13 app_cmdline -- 
scripts/common.sh@345 -- # : 1 00:06:37.009 08:43:13 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.009 08:43:13 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:37.009 08:43:13 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:37.009 08:43:13 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:37.009 08:43:13 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.009 08:43:13 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:37.009 08:43:13 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.009 08:43:13 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:37.009 08:43:13 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:37.009 08:43:13 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.009 08:43:13 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:37.009 08:43:13 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.009 08:43:13 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.009 08:43:13 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.009 08:43:13 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:37.009 08:43:13 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.009 08:43:13 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:37.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.009 --rc genhtml_branch_coverage=1 00:06:37.009 --rc genhtml_function_coverage=1 00:06:37.009 --rc genhtml_legend=1 00:06:37.009 --rc geninfo_all_blocks=1 00:06:37.009 --rc geninfo_unexecuted_blocks=1 00:06:37.009 00:06:37.009 ' 00:06:37.009 08:43:13 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:37.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.009 --rc genhtml_branch_coverage=1 00:06:37.009 --rc 
genhtml_function_coverage=1 00:06:37.009 --rc genhtml_legend=1 00:06:37.009 --rc geninfo_all_blocks=1 00:06:37.009 --rc geninfo_unexecuted_blocks=1 00:06:37.009 00:06:37.009 ' 00:06:37.009 08:43:13 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:37.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.009 --rc genhtml_branch_coverage=1 00:06:37.009 --rc genhtml_function_coverage=1 00:06:37.009 --rc genhtml_legend=1 00:06:37.009 --rc geninfo_all_blocks=1 00:06:37.009 --rc geninfo_unexecuted_blocks=1 00:06:37.009 00:06:37.009 ' 00:06:37.009 08:43:13 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:37.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.009 --rc genhtml_branch_coverage=1 00:06:37.009 --rc genhtml_function_coverage=1 00:06:37.009 --rc genhtml_legend=1 00:06:37.009 --rc geninfo_all_blocks=1 00:06:37.009 --rc geninfo_unexecuted_blocks=1 00:06:37.009 00:06:37.009 ' 00:06:37.009 08:43:13 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:37.009 08:43:13 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59854 00:06:37.009 08:43:13 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:37.009 08:43:13 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59854 00:06:37.009 08:43:13 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 59854 ']' 00:06:37.009 08:43:13 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.009 08:43:13 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:37.009 08:43:13 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:37.009 08:43:13 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:37.009 08:43:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:37.009 [2024-10-05 08:43:13.396175] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:06:37.009 [2024-10-05 08:43:13.396692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59854 ] 00:06:37.272 [2024-10-05 08:43:13.562044] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.543 [2024-10-05 08:43:13.812138] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.481 08:43:14 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:38.481 08:43:14 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:38.481 08:43:14 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:38.742 { 00:06:38.742 "version": "SPDK v25.01-pre git sha1 3950cd1bb", 00:06:38.742 "fields": { 00:06:38.742 "major": 25, 00:06:38.742 "minor": 1, 00:06:38.742 "patch": 0, 00:06:38.742 "suffix": "-pre", 00:06:38.742 "commit": "3950cd1bb" 00:06:38.742 } 00:06:38.742 } 00:06:38.742 08:43:14 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:38.742 08:43:14 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:38.742 08:43:14 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:38.742 08:43:14 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:38.742 08:43:14 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:38.742 08:43:14 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:38.742 08:43:14 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:38.742 
08:43:14 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.742 08:43:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:38.742 08:43:15 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.742 08:43:15 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:38.742 08:43:15 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:38.742 08:43:15 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:38.742 08:43:15 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:38.742 08:43:15 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:38.742 08:43:15 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:38.742 08:43:15 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:38.742 08:43:15 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:38.742 08:43:15 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:38.742 08:43:15 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:38.742 08:43:15 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:38.742 08:43:15 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:38.742 08:43:15 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:38.742 08:43:15 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:39.002 request: 00:06:39.002 { 00:06:39.002 "method": "env_dpdk_get_mem_stats", 
00:06:39.002 "req_id": 1 00:06:39.002 } 00:06:39.002 Got JSON-RPC error response 00:06:39.002 response: 00:06:39.002 { 00:06:39.002 "code": -32601, 00:06:39.002 "message": "Method not found" 00:06:39.002 } 00:06:39.002 08:43:15 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:39.002 08:43:15 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:39.002 08:43:15 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:39.002 08:43:15 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:39.002 08:43:15 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59854 00:06:39.002 08:43:15 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 59854 ']' 00:06:39.002 08:43:15 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 59854 00:06:39.002 08:43:15 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:39.002 08:43:15 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:39.002 08:43:15 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59854 00:06:39.002 killing process with pid 59854 00:06:39.002 08:43:15 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:39.002 08:43:15 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:39.002 08:43:15 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59854' 00:06:39.003 08:43:15 app_cmdline -- common/autotest_common.sh@969 -- # kill 59854 00:06:39.003 08:43:15 app_cmdline -- common/autotest_common.sh@974 -- # wait 59854 00:06:41.544 00:06:41.544 real 0m4.869s 00:06:41.544 user 0m4.842s 00:06:41.544 sys 0m0.816s 00:06:41.544 08:43:17 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.544 08:43:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:41.544 ************************************ 00:06:41.544 END TEST app_cmdline 00:06:41.544 ************************************ 00:06:41.544 08:43:18 -- 
spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:41.544 08:43:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.544 08:43:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.544 08:43:18 -- common/autotest_common.sh@10 -- # set +x 00:06:41.544 ************************************ 00:06:41.544 START TEST version 00:06:41.544 ************************************ 00:06:41.804 08:43:18 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:41.804 * Looking for test storage... 00:06:41.804 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:41.804 08:43:18 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:41.804 08:43:18 version -- common/autotest_common.sh@1681 -- # lcov --version 00:06:41.804 08:43:18 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:41.804 08:43:18 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:41.804 08:43:18 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.804 08:43:18 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.804 08:43:18 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.804 08:43:18 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.804 08:43:18 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.804 08:43:18 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.804 08:43:18 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.804 08:43:18 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.804 08:43:18 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.804 08:43:18 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.804 08:43:18 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.804 08:43:18 version -- scripts/common.sh@344 -- # case "$op" in 00:06:41.804 08:43:18 version -- scripts/common.sh@345 -- # : 1 00:06:41.804 08:43:18 version -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.804 08:43:18 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:41.804 08:43:18 version -- scripts/common.sh@365 -- # decimal 1 00:06:41.804 08:43:18 version -- scripts/common.sh@353 -- # local d=1 00:06:41.804 08:43:18 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.804 08:43:18 version -- scripts/common.sh@355 -- # echo 1 00:06:41.804 08:43:18 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.804 08:43:18 version -- scripts/common.sh@366 -- # decimal 2 00:06:41.804 08:43:18 version -- scripts/common.sh@353 -- # local d=2 00:06:41.804 08:43:18 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.804 08:43:18 version -- scripts/common.sh@355 -- # echo 2 00:06:41.804 08:43:18 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.804 08:43:18 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.804 08:43:18 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.804 08:43:18 version -- scripts/common.sh@368 -- # return 0 00:06:41.804 08:43:18 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.804 08:43:18 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:41.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.804 --rc genhtml_branch_coverage=1 00:06:41.804 --rc genhtml_function_coverage=1 00:06:41.804 --rc genhtml_legend=1 00:06:41.804 --rc geninfo_all_blocks=1 00:06:41.804 --rc geninfo_unexecuted_blocks=1 00:06:41.804 00:06:41.804 ' 00:06:41.804 08:43:18 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:41.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.804 --rc genhtml_branch_coverage=1 00:06:41.804 --rc genhtml_function_coverage=1 00:06:41.804 --rc genhtml_legend=1 00:06:41.804 --rc geninfo_all_blocks=1 00:06:41.804 --rc geninfo_unexecuted_blocks=1 
00:06:41.804 00:06:41.804 ' 00:06:41.804 08:43:18 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:41.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.804 --rc genhtml_branch_coverage=1 00:06:41.804 --rc genhtml_function_coverage=1 00:06:41.804 --rc genhtml_legend=1 00:06:41.804 --rc geninfo_all_blocks=1 00:06:41.804 --rc geninfo_unexecuted_blocks=1 00:06:41.804 00:06:41.804 ' 00:06:41.804 08:43:18 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:41.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.804 --rc genhtml_branch_coverage=1 00:06:41.804 --rc genhtml_function_coverage=1 00:06:41.804 --rc genhtml_legend=1 00:06:41.804 --rc geninfo_all_blocks=1 00:06:41.804 --rc geninfo_unexecuted_blocks=1 00:06:41.804 00:06:41.804 ' 00:06:41.804 08:43:18 version -- app/version.sh@17 -- # get_header_version major 00:06:41.805 08:43:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:41.805 08:43:18 version -- app/version.sh@14 -- # cut -f2 00:06:41.805 08:43:18 version -- app/version.sh@14 -- # tr -d '"' 00:06:41.805 08:43:18 version -- app/version.sh@17 -- # major=25 00:06:41.805 08:43:18 version -- app/version.sh@18 -- # get_header_version minor 00:06:41.805 08:43:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:41.805 08:43:18 version -- app/version.sh@14 -- # cut -f2 00:06:41.805 08:43:18 version -- app/version.sh@14 -- # tr -d '"' 00:06:41.805 08:43:18 version -- app/version.sh@18 -- # minor=1 00:06:41.805 08:43:18 version -- app/version.sh@19 -- # get_header_version patch 00:06:41.805 08:43:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:41.805 08:43:18 version -- app/version.sh@14 -- # cut -f2 00:06:41.805 
08:43:18 version -- app/version.sh@14 -- # tr -d '"' 00:06:42.065 08:43:18 version -- app/version.sh@19 -- # patch=0 00:06:42.065 08:43:18 version -- app/version.sh@20 -- # get_header_version suffix 00:06:42.065 08:43:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:42.065 08:43:18 version -- app/version.sh@14 -- # cut -f2 00:06:42.065 08:43:18 version -- app/version.sh@14 -- # tr -d '"' 00:06:42.065 08:43:18 version -- app/version.sh@20 -- # suffix=-pre 00:06:42.065 08:43:18 version -- app/version.sh@22 -- # version=25.1 00:06:42.065 08:43:18 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:42.065 08:43:18 version -- app/version.sh@28 -- # version=25.1rc0 00:06:42.065 08:43:18 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:42.065 08:43:18 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:42.065 08:43:18 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:42.065 08:43:18 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:42.065 ************************************ 00:06:42.065 END TEST version 00:06:42.065 ************************************ 00:06:42.065 00:06:42.065 real 0m0.323s 00:06:42.065 user 0m0.177s 00:06:42.065 sys 0m0.199s 00:06:42.065 08:43:18 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.065 08:43:18 version -- common/autotest_common.sh@10 -- # set +x 00:06:42.065 08:43:18 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:42.065 08:43:18 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:42.065 08:43:18 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:42.065 08:43:18 -- common/autotest_common.sh@1101 
-- # '[' 2 -le 1 ']' 00:06:42.065 08:43:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.065 08:43:18 -- common/autotest_common.sh@10 -- # set +x 00:06:42.065 ************************************ 00:06:42.065 START TEST bdev_raid 00:06:42.065 ************************************ 00:06:42.065 08:43:18 bdev_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:42.065 * Looking for test storage... 00:06:42.065 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:42.065 08:43:18 bdev_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:42.065 08:43:18 bdev_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:06:42.065 08:43:18 bdev_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:42.327 08:43:18 bdev_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:42.327 08:43:18 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.327 08:43:18 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.327 08:43:18 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.327 08:43:18 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.327 08:43:18 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.328 08:43:18 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.328 08:43:18 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.328 08:43:18 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:42.328 08:43:18 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:42.328 08:43:18 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.328 08:43:18 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.328 08:43:18 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:42.328 08:43:18 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:42.328 08:43:18 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.328 08:43:18 bdev_raid -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:42.328 08:43:18 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:42.328 08:43:18 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:42.328 08:43:18 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.328 08:43:18 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:42.328 08:43:18 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.328 08:43:18 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:42.328 08:43:18 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:42.328 08:43:18 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.328 08:43:18 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:42.328 08:43:18 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.328 08:43:18 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.328 08:43:18 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.328 08:43:18 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:42.328 08:43:18 bdev_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.328 08:43:18 bdev_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:42.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.328 --rc genhtml_branch_coverage=1 00:06:42.328 --rc genhtml_function_coverage=1 00:06:42.328 --rc genhtml_legend=1 00:06:42.328 --rc geninfo_all_blocks=1 00:06:42.328 --rc geninfo_unexecuted_blocks=1 00:06:42.328 00:06:42.328 ' 00:06:42.328 08:43:18 bdev_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:42.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.328 --rc genhtml_branch_coverage=1 00:06:42.328 --rc genhtml_function_coverage=1 00:06:42.328 --rc genhtml_legend=1 00:06:42.328 --rc geninfo_all_blocks=1 00:06:42.328 --rc geninfo_unexecuted_blocks=1 00:06:42.328 00:06:42.328 ' 00:06:42.328 08:43:18 bdev_raid -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:42.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.328 --rc genhtml_branch_coverage=1 00:06:42.328 --rc genhtml_function_coverage=1 00:06:42.328 --rc genhtml_legend=1 00:06:42.328 --rc geninfo_all_blocks=1 00:06:42.328 --rc geninfo_unexecuted_blocks=1 00:06:42.328 00:06:42.328 ' 00:06:42.328 08:43:18 bdev_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:42.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.328 --rc genhtml_branch_coverage=1 00:06:42.328 --rc genhtml_function_coverage=1 00:06:42.328 --rc genhtml_legend=1 00:06:42.328 --rc geninfo_all_blocks=1 00:06:42.328 --rc geninfo_unexecuted_blocks=1 00:06:42.328 00:06:42.328 ' 00:06:42.328 08:43:18 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:42.328 08:43:18 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:42.328 08:43:18 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:42.328 08:43:18 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:42.328 08:43:18 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:42.328 08:43:18 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:42.328 08:43:18 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:42.328 08:43:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:42.328 08:43:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.328 08:43:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:42.328 ************************************ 00:06:42.328 START TEST raid1_resize_data_offset_test 00:06:42.328 ************************************ 00:06:42.328 08:43:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1125 -- # raid_resize_data_offset_test 00:06:42.328 08:43:18 bdev_raid.raid1_resize_data_offset_test -- 
bdev/bdev_raid.sh@917 -- # raid_pid=60056 00:06:42.328 08:43:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60056' 00:06:42.328 Process raid pid: 60056 00:06:42.328 08:43:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60056 00:06:42.328 08:43:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:42.328 08:43:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@831 -- # '[' -z 60056 ']' 00:06:42.328 08:43:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.328 08:43:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:42.328 08:43:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.328 08:43:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:42.328 08:43:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.328 [2024-10-05 08:43:18.750134] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:06:42.328 [2024-10-05 08:43:18.750357] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:42.588 [2024-10-05 08:43:18.922241] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.849 [2024-10-05 08:43:19.169714] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.109 [2024-10-05 08:43:19.404938] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:43.109 [2024-10-05 08:43:19.405111] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:43.109 08:43:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:43.109 08:43:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # return 0 00:06:43.109 08:43:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:43.109 08:43:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.109 08:43:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.369 malloc0 00:06:43.369 08:43:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.369 08:43:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:43.369 08:43:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.369 08:43:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.369 malloc1 00:06:43.369 08:43:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.369 08:43:19 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:43.369 08:43:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.369 08:43:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.369 null0 00:06:43.369 08:43:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.369 08:43:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:43.369 08:43:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.369 08:43:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.369 [2024-10-05 08:43:19.791481] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:43.369 [2024-10-05 08:43:19.793460] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:43.369 [2024-10-05 08:43:19.793506] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:43.369 [2024-10-05 08:43:19.793645] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:43.369 [2024-10-05 08:43:19.793656] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:43.369 [2024-10-05 08:43:19.793905] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:43.369 [2024-10-05 08:43:19.794074] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:43.369 [2024-10-05 08:43:19.794088] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:43.369 [2024-10-05 08:43:19.794233] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:43.369 08:43:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:43.369 08:43:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:43.369 08:43:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:06:43.369 08:43:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:43.369 08:43:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:43.369 08:43:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:43.629 08:43:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 ))
00:06:43.629 08:43:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0
00:06:43.629 08:43:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:43.629 08:43:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:43.629 [2024-10-05 08:43:19.851314] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: null0
00:06:43.629 08:43:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:43.629 08:43:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30
00:06:43.629 08:43:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:43.629 08:43:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:44.198 malloc2
00:06:44.198 08:43:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:44.198 08:43:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev Raid malloc2
00:06:44.198 08:43:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:44.198 08:43:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:44.198 [2024-10-05 08:43:20.466560] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:06:44.198 [2024-10-05 08:43:20.484559] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:06:44.198 08:43:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:44.198 [2024-10-05 08:43:20.486734] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid
00:06:44.198 08:43:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:44.198 08:43:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:44.198 08:43:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:06:44.198 08:43:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:44.198 08:43:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:44.198 08:43:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 ))
00:06:44.198 08:43:20 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60056
00:06:44.198 08:43:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@950 -- # '[' -z 60056 ']'
00:06:44.198 08:43:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # kill -0 60056
00:06:44.199 08:43:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # uname
00:06:44.199 08:43:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:44.199 08:43:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60056
00:06:44.199 08:43:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:44.199 killing process with pid 60056
00:06:44.199 08:43:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:44.199 08:43:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60056'
00:06:44.199 08:43:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@969 -- # kill 60056
00:06:44.199 [2024-10-05 08:43:20.577166] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:06:44.199 08:43:20 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@974 -- # wait 60056
00:06:44.199 [2024-10-05 08:43:20.578582] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled
00:06:44.199 [2024-10-05 08:43:20.578649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:44.199 [2024-10-05 08:43:20.578667] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2
00:06:44.199 [2024-10-05 08:43:20.606650] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:44.199 [2024-10-05 08:43:20.607030] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:44.199 [2024-10-05 08:43:20.607058] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:06:46.115 [2024-10-05 08:43:22.475116] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:06:47.497 ************************************
00:06:47.497 END TEST raid1_resize_data_offset_test
00:06:47.497 ************************************
00:06:47.497 08:43:23 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0
00:06:47.497
00:06:47.497 real 0m5.152s
00:06:47.497 user 0m4.790s
00:06:47.497 sys 0m0.773s
00:06:47.497 08:43:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:47.497 08:43:23 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:47.497 08:43:23 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0
00:06:47.497 08:43:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:06:47.497 08:43:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:47.497 08:43:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:06:47.497 ************************************
00:06:47.497 START TEST raid0_resize_superblock_test
00:06:47.497 ************************************
00:06:47.497 08:43:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0
00:06:47.497 08:43:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0
00:06:47.497 08:43:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60145
00:06:47.497 Process raid pid: 60145
00:06:47.497 08:43:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:06:47.497 08:43:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60145'
00:06:47.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:47.497 08:43:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60145
00:06:47.497 08:43:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 60145 ']'
00:06:47.497 08:43:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:47.497 08:43:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:47.497 08:43:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:47.497 08:43:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:47.497 08:43:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:47.497 [2024-10-05 08:43:23.962817] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization...
00:06:47.497 [2024-10-05 08:43:23.963431] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:47.758 [2024-10-05 08:43:24.129143] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:48.018 [2024-10-05 08:43:24.371891] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:06:48.278 [2024-10-05 08:43:24.611249] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:48.278 [2024-10-05 08:43:24.611381] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:48.539 08:43:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:48.539 08:43:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:06:48.539 08:43:24 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:06:48.539 08:43:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:48.539 08:43:24 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:49.109 malloc0
00:06:49.109 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:49.109 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:06:49.109 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:49.109 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:49.109 [2024-10-05 08:43:25.381421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:06:49.109 [2024-10-05 08:43:25.381507] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:49.109 [2024-10-05 08:43:25.381529] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:06:49.109 [2024-10-05 08:43:25.381541] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:49.109 [2024-10-05 08:43:25.383797] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:49.109 [2024-10-05 08:43:25.383837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:06:49.109 pt0
00:06:49.109 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:49.109 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:06:49.109 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:49.109 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:49.109 81c0d645-9b79-4804-a406-d7afecd588f8
00:06:49.109 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:49.109 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:06:49.109 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:49.109 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:49.109 712c95f2-cd9c-4ecf-a7e2-3393083fef51
00:06:49.109 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:49.109 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:06:49.109 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:49.109 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:49.370 1de4b9fa-34c3-4567-aba5-d7486f2373b7
00:06:49.370 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:49.370 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:06:49.370 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:06:49.370 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:49.370 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:49.370 [2024-10-05 08:43:25.587999] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 712c95f2-cd9c-4ecf-a7e2-3393083fef51 is claimed
00:06:49.370 [2024-10-05 08:43:25.588096] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 1de4b9fa-34c3-4567-aba5-d7486f2373b7 is claimed
00:06:49.370 [2024-10-05 08:43:25.588213] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:06:49.370 [2024-10-05 08:43:25.588230] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512
00:06:49.370 [2024-10-05 08:43:25.588479] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:06:49.370 [2024-10-05 08:43:25.588669] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:06:49.370 [2024-10-05 08:43:25.588681] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:06:49.370 [2024-10-05 08:43:25.588841] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:49.370 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:49.370 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:06:49.370 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:06:49.370 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:49.370 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:49.370 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:49.370 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:06:49.370 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:06:49.370 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:49.370 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:49.370 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:06:49.370 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:49.370 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:06:49.370 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:49.370 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks'
00:06:49.370 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:49.370 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:49.370 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:49.370 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:49.370 [2024-10-05 08:43:25.679968] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:49.370 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:49.370 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:49.370 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:49.370 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 ))
00:06:49.370 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100
00:06:49.370 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:49.370 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:49.370 [2024-10-05 08:43:25.711853] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:06:49.370 [2024-10-05 08:43:25.711878] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '712c95f2-cd9c-4ecf-a7e2-3393083fef51' was resized: old size 131072, new size 204800
00:06:49.370 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:49.370 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100
00:06:49.370 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:49.370 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:49.370 [2024-10-05 08:43:25.723817] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:06:49.370 [2024-10-05 08:43:25.723838] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '1de4b9fa-34c3-4567-aba5-d7486f2373b7' was resized: old size 131072, new size 204800
00:06:49.370 [2024-10-05 08:43:25.723863] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216
00:06:49.371 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:49.371 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:06:49.371 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:49.371 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:49.371 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:06:49.371 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:49.371 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 ))
00:06:49.371 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
00:06:49.371 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:06:49.371 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:49.371 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:49.371 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:49.371 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 ))
00:06:49.371 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:49.371 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:49.371 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:49.371 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks'
00:06:49.371 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:49.371 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:49.371 [2024-10-05 08:43:25.839709] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:49.633 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:49.633 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:49.633 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:49.633 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 ))
00:06:49.633 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0
00:06:49.633 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:49.633 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:49.633 [2024-10-05 08:43:25.883421] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0
00:06:49.633 [2024-10-05 08:43:25.883479] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0
00:06:49.633 [2024-10-05 08:43:25.883505] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:06:49.633 [2024-10-05 08:43:25.883519] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1
00:06:49.633 [2024-10-05 08:43:25.883602] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:49.633 [2024-10-05 08:43:25.883634] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:49.633 [2024-10-05 08:43:25.883645] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:06:49.633 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:49.633 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:06:49.633 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:49.633 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:49.633 [2024-10-05 08:43:25.895366] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:06:49.633 [2024-10-05 08:43:25.895415] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:49.633 [2024-10-05 08:43:25.895434] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:06:49.633 [2024-10-05 08:43:25.895444] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:49.633 [2024-10-05 08:43:25.897736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:49.633 [2024-10-05 08:43:25.897772] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:06:49.633 [2024-10-05 08:43:25.899392] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 712c95f2-cd9c-4ecf-a7e2-3393083fef51
00:06:49.633 [2024-10-05 08:43:25.899514] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 712c95f2-cd9c-4ecf-a7e2-3393083fef51 is claimed
00:06:49.633 [2024-10-05 08:43:25.899621] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 1de4b9fa-34c3-4567-aba5-d7486f2373b7
00:06:49.633 [2024-10-05 08:43:25.899640] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 1de4b9fa-34c3-4567-aba5-d7486f2373b7 is claimed
00:06:49.633 [2024-10-05 08:43:25.899772] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 1de4b9fa-34c3-4567-aba5-d7486f2373b7 (2) smaller than existing raid bdev Raid (3)
00:06:49.633 [2024-10-05 08:43:25.899793] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 712c95f2-cd9c-4ecf-a7e2-3393083fef51: File exists
00:06:49.633 [2024-10-05 08:43:25.899827] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:06:49.633 [2024-10-05 08:43:25.899839] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512
00:06:49.633 [2024-10-05 08:43:25.900089] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:06:49.633 pt0
00:06:49.633 [2024-10-05 08:43:25.900226] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:06:49.633 [2024-10-05 08:43:25.900241] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00
00:06:49.633 [2024-10-05 08:43:25.900377] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:49.633 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:49.633 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine
00:06:49.633 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:49.633 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:49.633 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:49.633 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:49.633 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:49.633 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:49.633 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks'
00:06:49.633 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:49.633 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:49.633 [2024-10-05 08:43:25.923644] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:49.633 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:49.633 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:49.633 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:49.633 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 ))
00:06:49.633 08:43:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60145
00:06:49.633 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 60145 ']'
00:06:49.633 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 60145
00:06:49.633 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # uname
00:06:49.633 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:49.633 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60145
00:06:49.633 killing process with pid 60145
00:06:49.633 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:49.633 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:49.633 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60145'
00:06:49.633 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 60145
00:06:49.633 [2024-10-05 08:43:25.993748] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:06:49.633 [2024-10-05 08:43:25.993797] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:49.633 [2024-10-05 08:43:25.993830] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:49.633 [2024-10-05 08:43:25.993837] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline
00:06:49.633 08:43:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 60145
00:06:51.013 [2024-10-05 08:43:27.482404] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:06:52.393 08:43:28 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0
00:06:52.393
00:06:52.393 real 0m4.922s
00:06:52.393 user 0m4.906s
00:06:52.393 sys 0m0.745s
00:06:52.393 08:43:28 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:52.393 ************************************
00:06:52.393 END TEST raid0_resize_superblock_test
00:06:52.393 ************************************
00:06:52.393 08:43:28 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:52.393 08:43:28 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1
00:06:52.393 08:43:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:06:52.393 08:43:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:52.393 08:43:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:06:52.393 ************************************
00:06:52.393 START TEST raid1_resize_superblock_test
00:06:52.393 ************************************
00:06:52.393 08:43:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1
00:06:52.653 08:43:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1
00:06:52.653 08:43:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60245
00:06:52.653 Process raid pid: 60245
00:06:52.653 08:43:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:06:52.653 08:43:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60245'
00:06:52.653 08:43:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60245
00:06:52.653 08:43:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 60245 ']'
00:06:52.653 08:43:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:52.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:52.653 08:43:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:52.653 08:43:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:52.653 08:43:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:52.653 08:43:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:52.653 [2024-10-05 08:43:28.947432] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization...
00:06:52.653 [2024-10-05 08:43:28.947543] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:52.913 [2024-10-05 08:43:29.113638] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:53.173 [2024-10-05 08:43:29.353658] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:06:53.173 [2024-10-05 08:43:29.575712] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:53.173 [2024-10-05 08:43:29.575754] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:53.433 08:43:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:53.433 08:43:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:06:53.433 08:43:29 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:06:53.433 08:43:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:53.433 08:43:29 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:54.004 malloc0
00:06:54.004 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:54.004 08:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:06:54.004 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:54.004 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:54.004 [2024-10-05 08:43:30.395749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:06:54.004 [2024-10-05 08:43:30.395833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:54.004 [2024-10-05 08:43:30.395858] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:06:54.004 [2024-10-05 08:43:30.395869] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:54.004 [2024-10-05 08:43:30.398215] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:54.004 [2024-10-05 08:43:30.398254] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:06:54.004 pt0
00:06:54.004 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:54.004 08:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:06:54.004 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:54.004 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:54.287 868b8af6-a26b-42b4-a198-6fa7ba6e7940
00:06:54.287 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:54.287 08:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:06:54.287 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:54.287 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:54.287 9a484db5-eac7-4f99-a4d6-7ada35bd64ba
00:06:54.287 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:54.287 08:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:06:54.287 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:54.287 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:54.370 2152094d-c26a-4382-ab94-109df971d845
00:06:54.370 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:54.370 08:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:06:54.370 08:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:06:54.370 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:54.370 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:54.288 [2024-10-05 08:43:30.602785] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9a484db5-eac7-4f99-a4d6-7ada35bd64ba is claimed
00:06:54.288 [2024-10-05 08:43:30.602891] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2152094d-c26a-4382-ab94-109df971d845 is claimed
00:06:54.288 [2024-10-05 08:43:30.603027] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:06:54.288 [2024-10-05 08:43:30.603045] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512
00:06:54.288 [2024-10-05 08:43:30.603307] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:06:54.288 [2024-10-05 08:43:30.603492] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:06:54.288 [2024-10-05 08:43:30.603509] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:06:54.288 [2024-10-05 08:43:30.603666] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:54.288 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:54.288 08:43:30 bdev_raid.raid1_resize_superblock_test --
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:54.288 08:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:54.288 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.288 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.288 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.288 08:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:54.288 08:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:54.288 08:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:54.288 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.288 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.288 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.288 08:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:54.288 08:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:54.288 08:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:06:54.288 08:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:54.288 08:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:54.288 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.288 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.288 [2024-10-05 
08:43:30.714735] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:54.288 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.288 08:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:54.563 08:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:54.563 08:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:06:54.563 08:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:54.563 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.563 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.563 [2024-10-05 08:43:30.758604] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:54.563 [2024-10-05 08:43:30.758669] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '9a484db5-eac7-4f99-a4d6-7ada35bd64ba' was resized: old size 131072, new size 204800 00:06:54.563 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.563 08:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:54.563 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.563 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.563 [2024-10-05 08:43:30.770552] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:54.563 [2024-10-05 08:43:30.770576] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '2152094d-c26a-4382-ab94-109df971d845' was resized: old size 131072, new size 204800 00:06:54.563 
[2024-10-05 08:43:30.770602] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:06:54.563 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.563 08:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:54.563 08:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:54.563 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.563 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.563 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.563 08:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:54.563 08:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:54.563 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.563 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.563 08:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:54.563 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.563 08:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:54.563 08:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:54.563 08:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:06:54.563 08:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:54.563 08:43:30 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:54.563 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.563 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.563 [2024-10-05 08:43:30.874447] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:54.563 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.563 08:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:54.563 08:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:54.563 08:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:06:54.563 08:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:54.563 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.563 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.563 [2024-10-05 08:43:30.902212] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:54.563 [2024-10-05 08:43:30.902271] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:54.563 [2024-10-05 08:43:30.902305] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:54.564 [2024-10-05 08:43:30.902422] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:54.564 [2024-10-05 08:43:30.902563] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:54.564 [2024-10-05 08:43:30.902619] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:06:54.564 [2024-10-05 08:43:30.902635] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:54.564 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.564 08:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:54.564 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.564 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.564 [2024-10-05 08:43:30.914160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:54.564 [2024-10-05 08:43:30.914212] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:54.564 [2024-10-05 08:43:30.914230] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:54.564 [2024-10-05 08:43:30.914240] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:54.564 [2024-10-05 08:43:30.916535] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:54.564 [2024-10-05 08:43:30.916572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:54.564 [2024-10-05 08:43:30.918122] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 9a484db5-eac7-4f99-a4d6-7ada35bd64ba 00:06:54.564 [2024-10-05 08:43:30.918179] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9a484db5-eac7-4f99-a4d6-7ada35bd64ba is claimed 00:06:54.564 [2024-10-05 08:43:30.918277] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 2152094d-c26a-4382-ab94-109df971d845 00:06:54.564 [2024-10-05 08:43:30.918294] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2152094d-c26a-4382-ab94-109df971d845 is claimed 00:06:54.564 [2024-10-05 
08:43:30.918445] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 2152094d-c26a-4382-ab94-109df971d845 (2) smaller than existing raid bdev Raid (3) 00:06:54.564 [2024-10-05 08:43:30.918467] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 9a484db5-eac7-4f99-a4d6-7ada35bd64ba: File exists 00:06:54.564 [2024-10-05 08:43:30.918498] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:54.564 [2024-10-05 08:43:30.918510] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:54.564 [2024-10-05 08:43:30.918746] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:54.564 pt0 00:06:54.564 [2024-10-05 08:43:30.918875] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:54.564 [2024-10-05 08:43:30.918883] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:54.564 [2024-10-05 08:43:30.919055] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:54.564 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.564 08:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:54.564 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.564 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.564 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.564 08:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:54.564 08:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:54.564 08:43:30 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:54.564 08:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:06:54.564 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.564 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.564 [2024-10-05 08:43:30.942369] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:54.564 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.564 08:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:54.564 08:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:54.564 08:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:06:54.564 08:43:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60245 00:06:54.564 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 60245 ']' 00:06:54.564 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 60245 00:06:54.564 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:06:54.564 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:54.564 08:43:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60245 00:06:54.564 killing process with pid 60245 00:06:54.564 08:43:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:54.564 08:43:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:54.564 08:43:31 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 60245' 00:06:54.564 08:43:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 60245 00:06:54.564 [2024-10-05 08:43:31.024619] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:54.564 [2024-10-05 08:43:31.024669] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:54.564 [2024-10-05 08:43:31.024704] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:54.564 [2024-10-05 08:43:31.024712] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:54.564 08:43:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 60245 00:06:56.474 [2024-10-05 08:43:32.520010] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:57.412 ************************************ 00:06:57.412 END TEST raid1_resize_superblock_test 00:06:57.412 ************************************ 00:06:57.412 08:43:33 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:57.412 00:06:57.412 real 0m4.967s 00:06:57.412 user 0m4.995s 00:06:57.412 sys 0m0.718s 00:06:57.412 08:43:33 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.412 08:43:33 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.672 08:43:33 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:06:57.672 08:43:33 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:06:57.672 08:43:33 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:06:57.672 08:43:33 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:06:57.672 08:43:33 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:06:57.672 08:43:33 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:06:57.672 
08:43:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:57.672 08:43:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.672 08:43:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:57.672 ************************************ 00:06:57.672 START TEST raid_function_test_raid0 00:06:57.672 ************************************ 00:06:57.672 08:43:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0 00:06:57.672 08:43:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:06:57.672 08:43:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:57.672 08:43:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:57.673 08:43:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60318 00:06:57.673 08:43:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:57.673 08:43:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60318' 00:06:57.673 Process raid pid: 60318 00:06:57.673 08:43:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60318 00:06:57.673 08:43:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 60318 ']' 00:06:57.673 08:43:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.673 08:43:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:57.673 08:43:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:57.673 08:43:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:57.673 08:43:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:57.673 [2024-10-05 08:43:34.008784] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:06:57.673 [2024-10-05 08:43:34.009022] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:57.932 [2024-10-05 08:43:34.176921] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.192 [2024-10-05 08:43:34.423094] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.193 [2024-10-05 08:43:34.643665] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:58.193 [2024-10-05 08:43:34.643802] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:58.452 08:43:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.452 08:43:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:06:58.452 08:43:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:58.452 08:43:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.452 08:43:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:58.713 Base_1 00:06:58.713 08:43:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.713 08:43:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:58.713 08:43:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.713 
08:43:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:58.713 Base_2 00:06:58.713 08:43:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.713 08:43:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:06:58.713 08:43:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.713 08:43:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:58.713 [2024-10-05 08:43:34.986438] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:58.713 [2024-10-05 08:43:34.988480] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:58.713 [2024-10-05 08:43:34.988550] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:58.713 [2024-10-05 08:43:34.988563] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:58.713 [2024-10-05 08:43:34.988812] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:58.713 [2024-10-05 08:43:34.988982] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:58.713 [2024-10-05 08:43:34.988992] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:06:58.713 [2024-10-05 08:43:34.989154] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:58.713 08:43:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.713 08:43:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:58.713 08:43:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.713 08:43:34 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@10 -- # set +x 00:06:58.713 08:43:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:58.713 08:43:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.713 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:58.713 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:58.713 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:58.713 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:58.713 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:58.713 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:58.713 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:58.713 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:58.713 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:06:58.713 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:58.713 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:58.713 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:58.973 [2024-10-05 08:43:35.238093] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:58.973 /dev/nbd0 00:06:58.973 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:58.973 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:06:58.973 08:43:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:58.973 08:43:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:06:58.973 08:43:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:58.973 08:43:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:58.973 08:43:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:58.973 08:43:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break 00:06:58.973 08:43:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:58.973 08:43:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:58.973 08:43:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:58.973 1+0 records in 00:06:58.973 1+0 records out 00:06:58.973 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000606131 s, 6.8 MB/s 00:06:58.973 08:43:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.974 08:43:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # size=4096 00:06:58.974 08:43:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.974 08:43:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:58.974 08:43:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0 00:06:58.974 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.974 08:43:35 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:58.974 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:58.974 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:58.974 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:59.234 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:59.234 { 00:06:59.234 "nbd_device": "/dev/nbd0", 00:06:59.234 "bdev_name": "raid" 00:06:59.234 } 00:06:59.234 ]' 00:06:59.234 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:59.234 { 00:06:59.234 "nbd_device": "/dev/nbd0", 00:06:59.234 "bdev_name": "raid" 00:06:59.234 } 00:06:59.234 ]' 00:06:59.234 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:59.234 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:59.234 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:59.234 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:59.234 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:06:59.234 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:06:59.234 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:06:59.234 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:59.234 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:59.234 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:59.234 08:43:35 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0
00:06:59.234 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize
00:06:59.234 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0
00:06:59.234 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC
00:06:59.234 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5
00:06:59.234 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512
00:06:59.234 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096
00:06:59.234 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152
00:06:59.234 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321')
00:06:59.234 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs
00:06:59.234 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456')
00:06:59.234 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums
00:06:59.234 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off
00:06:59.234 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len
00:06:59.234 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
00:06:59.234 4096+0 records in
00:06:59.234 4096+0 records out
00:06:59.234 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0351703 s, 59.6 MB/s
00:06:59.234 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
00:06:59.494 4096+0 records in
00:06:59.494 4096+0 records out
00:06:59.494 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.19243 s, 10.9 MB/s
00:06:59.494 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0
00:06:59.494 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:06:59.494 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 ))
00:06:59.494 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:06:59.494 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0
00:06:59.494 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536
00:06:59.494 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc
00:06:59.494 128+0 records in
00:06:59.494 128+0 records out
00:06:59.494 65536 bytes (66 kB, 64 KiB) copied, 0.00122727 s, 53.4 MB/s
00:06:59.494 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0
00:06:59.494 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:06:59.494 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:06:59.494 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:06:59.494 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:06:59.494 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336
00:06:59.494 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920
00:06:59.494 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc
00:06:59.494 2035+0 records in
00:06:59.494 2035+0 records out
00:06:59.494 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.015366 s, 67.8 MB/s
00:06:59.494 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0
00:06:59.494 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:06:59.494 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:06:59.494 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:06:59.494 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:06:59.494 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352
00:06:59.494 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472
00:06:59.494 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc
00:06:59.494 456+0 records in
00:06:59.494 456+0 records out
00:06:59.494 233472 bytes (233 kB, 228 KiB) copied, 0.00403016 s, 57.9 MB/s
00:06:59.494 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0
00:06:59.494 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:06:59.494 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:06:59.494 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:06:59.494 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:06:59.494 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0
00:06:59.494 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:06:59.494 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:06:59.494 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:06:59.494 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:59.494 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i
00:06:59.494 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:59.494 08:43:35 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:06:59.755 08:43:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:59.755 [2024-10-05 08:43:36.157440] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:59.755 08:43:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:59.755 08:43:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:59.755 08:43:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:59.755 08:43:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:59.755 08:43:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:59.755 08:43:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break
00:06:59.755 08:43:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0
00:06:59.755 08:43:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock
00:06:59.755 08:43:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:06:59.755 08:43:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:07:00.043 08:43:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:00.043 08:43:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:00.043 08:43:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:00.043 08:43:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:00.043 08:43:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:00.043 08:43:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo ''
00:07:00.043 08:43:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true
00:07:00.043 08:43:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0
00:07:00.043 08:43:36 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0
00:07:00.043 08:43:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0
00:07:00.043 08:43:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']'
00:07:00.043 08:43:36 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60318
00:07:00.043 08:43:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 60318 ']'
00:07:00.043 08:43:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # kill -0 60318
00:07:00.043 08:43:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname
00:07:00.043 08:43:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:00.043 08:43:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60318
00:07:00.043 killing process with pid 60318
00:07:00.043 08:43:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:00.043 08:43:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:00.043 08:43:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60318'
00:07:00.043 08:43:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 60318
00:07:00.043 [2024-10-05 08:43:36.467968] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start [2024-10-05 08:43:36.468085] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:00.043 08:43:36 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 60318
00:07:00.043 [2024-10-05 08:43:36.468139] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:00.043 [2024-10-05 08:43:36.468152] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline
00:07:00.304 [2024-10-05 08:43:36.685868] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:01.687 08:43:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0
00:07:01.687
00:07:01.687 real 0m4.082s
00:07:01.687 user 0m4.498s
00:07:01.687 sys 0m1.128s
00:07:01.687 08:43:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:01.687 ************************************
00:07:01.687 END TEST raid_function_test_raid0
00:07:01.687 ************************************
00:07:01.687 08:43:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:07:01.687 08:43:38 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat
00:07:01.687 08:43:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:07:01.687 08:43:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:01.687 08:43:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:01.687 ************************************
00:07:01.687 START TEST raid_function_test_concat ************************************
00:07:01.687 08:43:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat
00:07:01.687 08:43:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat
00:07:01.687 08:43:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0
00:07:01.687 08:43:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev
00:07:01.687 08:43:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60423
00:07:01.687 08:43:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:01.687 08:43:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60423'
00:07:01.687 Process raid pid: 60423
00:07:01.687 08:43:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60423
00:07:01.687 08:43:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 60423 ']'
00:07:01.687 08:43:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:01.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:01.687 08:43:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:01.687 08:43:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:01.687 08:43:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:01.687 08:43:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:07:01.947 [2024-10-05 08:43:38.167080] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... [2024-10-05 08:43:38.167298] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:01.947 [2024-10-05 08:43:38.333534] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:02.207 [2024-10-05 08:43:38.578498] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:07:02.466 [2024-10-05 08:43:38.807516] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size [2024-10-05 08:43:38.807554] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:02.735 08:43:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:02.735 08:43:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0
00:07:02.735 08:43:38 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1
00:07:02.735 08:43:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:02.735 08:43:38 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:07:02.735 Base_1
00:07:02.735 08:43:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:02.735 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2
00:07:02.735 08:43:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:02.735 08:43:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:07:02.735 Base_2
00:07:02.735 08:43:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:02.735 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid
00:07:02.735 08:43:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:02.735 08:43:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:07:02.735 [2024-10-05 08:43:39.101818] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:07:02.735 [2024-10-05 08:43:39.103816] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:07:02.735 [2024-10-05 08:43:39.103882] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:07:02.735 [2024-10-05 08:43:39.103894] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:07:02.735 [2024-10-05 08:43:39.104150] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:07:02.735 [2024-10-05 08:43:39.104334] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:07:02.735 [2024-10-05 08:43:39.104345] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780
00:07:02.735 [2024-10-05 08:43:39.104503] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:02.735 08:43:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:02.735 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online
00:07:02.735 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)'
00:07:02.735 08:43:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:02.735 08:43:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:07:02.735 08:43:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:02.735 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid
00:07:02.735 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']'
00:07:02.735 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0
00:07:02.735 08:43:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:07:02.735 08:43:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid')
00:07:02.735 08:43:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:02.735 08:43:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:07:02.735 08:43:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:02.735 08:43:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i
00:07:02.735 08:43:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:02.735 08:43:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:07:02.735 08:43:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0
00:07:03.006 [2024-10-05 08:43:39.341365] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:07:03.006 /dev/nbd0
00:07:03.006 08:43:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:03.006 08:43:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:03.006 08:43:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:07:03.006 08:43:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i
00:07:03.006 08:43:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:07:03.006 08:43:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:07:03.006 08:43:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:07:03.006 08:43:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break
00:07:03.006 08:43:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:07:03.006 08:43:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:07:03.006 08:43:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:03.006 1+0 records in
00:07:03.006 1+0 records out
00:07:03.006 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000704211 s, 5.8 MB/s
00:07:03.006 08:43:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:03.006 08:43:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096
00:07:03.006 08:43:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:03.006 08:43:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:07:03.006 08:43:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0
00:07:03.006 08:43:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 08:43:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:07:03.006 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock
00:07:03.006 08:43:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:07:03.006 08:43:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:07:03.266 08:43:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:03.266 {
00:07:03.266 "nbd_device": "/dev/nbd0",
00:07:03.266 "bdev_name": "raid"
00:07:03.266 }
00:07:03.266 ]'
00:07:03.266 08:43:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[
00:07:03.266 {
00:07:03.266 "nbd_device": "/dev/nbd0",
00:07:03.266 "bdev_name": "raid"
00:07:03.267 }
00:07:03.267 ]'
00:07:03.267 08:43:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:03.267 08:43:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:07:03.267 08:43:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:07:03.267 08:43:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:03.267 08:43:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1
00:07:03.267 08:43:39 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1
00:07:03.267 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1
00:07:03.267 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']'
00:07:03.267 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0
00:07:03.267 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard
00:07:03.267 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0
00:07:03.267 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize
00:07:03.267 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0
00:07:03.267 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC
00:07:03.267 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5
00:07:03.267 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512
00:07:03.267 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096
00:07:03.267 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152
00:07:03.267 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321')
00:07:03.267 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs
00:07:03.267 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456')
00:07:03.267 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums
00:07:03.267 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off
00:07:03.267 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len
00:07:03.267 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
00:07:03.267 4096+0 records in
00:07:03.267 4096+0 records out
00:07:03.267 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0216156 s, 97.0 MB/s
00:07:03.267 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
00:07:03.527 4096+0 records in
00:07:03.527 4096+0 records out
00:07:03.527 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.210981 s, 9.9 MB/s
00:07:03.527 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0
00:07:03.527 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:03.527 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 ))
00:07:03.527 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:03.527 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0
00:07:03.527 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536
00:07:03.527 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc
00:07:03.527 128+0 records in
00:07:03.527 128+0 records out
00:07:03.527 65536 bytes (66 kB, 64 KiB) copied, 0.00128521 s, 51.0 MB/s
00:07:03.527 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0
00:07:03.527 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:07:03.527 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:03.527 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:07:03.527 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:03.527 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336
00:07:03.527 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920
00:07:03.527 08:43:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc
00:07:03.787 2035+0 records in
00:07:03.787 2035+0 records out
00:07:03.787 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0156458 s, 66.6 MB/s
00:07:03.787 08:43:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0
00:07:03.787 08:43:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:07:03.787 08:43:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:03.787 08:43:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:07:03.787 08:43:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:03.787 08:43:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352
00:07:03.787 08:43:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472
00:07:03.787 08:43:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc
00:07:03.787 456+0 records in
00:07:03.787 456+0 records out
00:07:03.787 233472 bytes (233 kB, 228 KiB) copied, 0.00299811 s, 77.9 MB/s
00:07:03.787 08:43:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0
00:07:03.787 08:43:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:07:03.787 08:43:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:03.787 08:43:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:07:03.787 08:43:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:03.787 08:43:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0
00:07:03.787 08:43:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:07:03.787 08:43:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:07:03.787 08:43:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:07:03.787 08:43:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:03.787 08:43:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i
00:07:03.787 08:43:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:03.787 08:43:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:07:04.047 08:43:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:04.047 08:43:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:04.047 08:43:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:04.047 08:43:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:04.047 08:43:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) [2024-10-05 08:43:40.267158] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:04.047 08:43:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:04.047 08:43:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break
00:07:04.047 08:43:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0
00:07:04.047 08:43:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock
00:07:04.047 08:43:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:07:04.047 08:43:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:07:04.047 08:43:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:04.047 08:43:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:04.047 08:43:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:04.307 08:43:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:04.307 08:43:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo ''
00:07:04.307 08:43:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:04.307 08:43:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true
00:07:04.307 08:43:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0
00:07:04.307 08:43:40 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0
00:07:04.307 08:43:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0
00:07:04.307 08:43:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']'
00:07:04.307 08:43:40 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60423
00:07:04.307 08:43:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 60423 ']'
00:07:04.307 08:43:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 60423
00:07:04.307 08:43:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname
00:07:04.307 08:43:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:04.307 08:43:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60423
00:07:04.307 killing process with pid 60423 08:43:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:04.307 08:43:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:04.307 08:43:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60423'
00:07:04.307 08:43:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 60423
00:07:04.307 [2024-10-05 08:43:40.584512] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:04.307 08:43:40 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 60423
00:07:04.307 [2024-10-05 08:43:40.584631] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:04.307 [2024-10-05 08:43:40.584686] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:04.307 [2024-10-05 08:43:40.584698] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline
00:07:04.567 [2024-10-05 08:43:40.804877] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:05.948 08:43:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0
00:07:05.948
00:07:05.948 real 0m4.041s
00:07:05.948 user 0m4.504s
00:07:05.948 sys 0m1.049s
00:07:05.948 08:43:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:05.948 ************************************
00:07:05.948 END TEST raid_function_test_concat
00:07:05.948 ************************************
00:07:05.948 08:43:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:07:05.948 08:43:42 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0
00:07:05.948 08:43:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:07:05.948 08:43:42 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:05.948 08:43:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:05.948 ************************************
00:07:05.948 START TEST raid0_resize_test ************************************
00:07:05.948 08:43:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 0
00:07:05.948 08:43:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0
00:07:05.948 08:43:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512
00:07:05.948 08:43:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32
00:07:05.948 08:43:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64
00:07:05.948 08:43:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt
00:07:05.948 08:43:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb
00:07:05.948 08:43:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb
00:07:05.948 08:43:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size
00:07:05.948 Process raid pid: 60528
00:07:05.948 08:43:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60528
00:07:05.948 08:43:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60528'
00:07:05.948 08:43:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60528
00:07:05.948 08:43:42 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:05.948 08:43:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 60528 ']'
00:07:05.948 08:43:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:05.948 08:43:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:05.948 08:43:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:05.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:05.948 08:43:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:05.948 08:43:42 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:05.948 [2024-10-05 08:43:42.270766] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... [2024-10-05 08:43:42.270992] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:06.209 [2024-10-05 08:43:42.437102] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:06.469 [2024-10-05 08:43:42.692517] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:07:06.469 [2024-10-05 08:43:42.928513] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size [2024-10-05 08:43:42.928651] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:06.728 08:43:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:06.728 08:43:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0
00:07:06.728 08:43:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512
00:07:06.728 08:43:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:06.728 08:43:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:06.728 Base_1
00:07:06.728
08:43:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:06.728 08:43:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.728 08:43:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.728 Base_2 00:07:06.728 08:43:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.728 08:43:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:06.728 08:43:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:06.728 08:43:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.728 08:43:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.728 [2024-10-05 08:43:43.141585] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:06.728 [2024-10-05 08:43:43.143485] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:06.728 [2024-10-05 08:43:43.143626] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:06.728 [2024-10-05 08:43:43.143642] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:06.728 [2024-10-05 08:43:43.143858] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:06.728 [2024-10-05 08:43:43.144003] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:06.728 [2024-10-05 08:43:43.144015] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:06.728 [2024-10-05 08:43:43.144139] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:06.728 08:43:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.728 
08:43:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:06.728 08:43:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.728 08:43:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.728 [2024-10-05 08:43:43.153521] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:06.728 [2024-10-05 08:43:43.153549] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:06.728 true 00:07:06.728 08:43:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.728 08:43:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:06.728 08:43:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.728 08:43:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:06.728 08:43:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.728 [2024-10-05 08:43:43.169615] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:06.728 08:43:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.988 08:43:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:06.988 08:43:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:06.988 08:43:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:06.988 08:43:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:06.988 08:43:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:06.988 08:43:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:06.988 08:43:43 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.988 08:43:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.988 [2024-10-05 08:43:43.213397] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:06.988 [2024-10-05 08:43:43.213419] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:06.988 [2024-10-05 08:43:43.213447] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:06.988 true 00:07:06.988 08:43:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.988 08:43:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:06.988 08:43:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:06.988 08:43:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.988 08:43:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.988 [2024-10-05 08:43:43.225505] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:06.988 08:43:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.988 08:43:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:06.988 08:43:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:06.988 08:43:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:06.988 08:43:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:06.988 08:43:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:06.988 08:43:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60528 00:07:06.988 08:43:43 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@950 -- # '[' -z 60528 ']' 00:07:06.988 08:43:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 60528 00:07:06.988 08:43:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname 00:07:06.988 08:43:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:06.988 08:43:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60528 00:07:06.988 08:43:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:06.988 08:43:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:06.988 08:43:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60528' 00:07:06.988 killing process with pid 60528 00:07:06.988 08:43:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 60528 00:07:06.988 [2024-10-05 08:43:43.311326] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:06.988 [2024-10-05 08:43:43.311444] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:06.988 [2024-10-05 08:43:43.311510] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:06.988 08:43:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 60528 00:07:06.988 [2024-10-05 08:43:43.311567] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:06.988 [2024-10-05 08:43:43.328019] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:08.369 08:43:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:08.369 00:07:08.369 real 0m2.458s 00:07:08.369 user 0m2.487s 00:07:08.369 sys 0m0.460s 00:07:08.369 08:43:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.369 
08:43:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.369 ************************************ 00:07:08.369 END TEST raid0_resize_test 00:07:08.369 ************************************ 00:07:08.369 08:43:44 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:08.369 08:43:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:08.369 08:43:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.369 08:43:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:08.369 ************************************ 00:07:08.369 START TEST raid1_resize_test 00:07:08.369 ************************************ 00:07:08.369 08:43:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1 00:07:08.369 08:43:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:07:08.369 08:43:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:08.369 08:43:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:08.369 08:43:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:08.369 08:43:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:08.369 08:43:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:08.369 08:43:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:08.369 08:43:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:08.369 08:43:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60572 00:07:08.369 08:43:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:08.369 08:43:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60572' 
00:07:08.369 Process raid pid: 60572 00:07:08.369 08:43:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60572 00:07:08.369 08:43:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 60572 ']' 00:07:08.369 08:43:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.369 08:43:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.369 08:43:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.369 08:43:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.369 08:43:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.369 [2024-10-05 08:43:44.797313] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:07:08.369 [2024-10-05 08:43:44.797509] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:08.629 [2024-10-05 08:43:44.963730] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.888 [2024-10-05 08:43:45.216299] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.146 [2024-10-05 08:43:45.451655] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.146 [2024-10-05 08:43:45.451811] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.407 08:43:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.407 08:43:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0 00:07:09.407 08:43:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:09.407 08:43:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.407 08:43:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.407 Base_1 00:07:09.407 08:43:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.407 08:43:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:09.407 08:43:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.407 08:43:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.407 Base_2 00:07:09.407 08:43:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.407 08:43:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:09.407 08:43:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:09.407 08:43:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.407 08:43:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.407 [2024-10-05 08:43:45.675555] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:09.407 [2024-10-05 08:43:45.677586] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:09.407 [2024-10-05 08:43:45.677681] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:09.407 [2024-10-05 08:43:45.677714] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:09.407 [2024-10-05 08:43:45.677981] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:09.407 [2024-10-05 08:43:45.678151] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:09.407 [2024-10-05 08:43:45.678196] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:09.407 [2024-10-05 08:43:45.678371] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:09.407 08:43:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.407 08:43:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:09.408 08:43:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.408 08:43:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.408 [2024-10-05 08:43:45.687482] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:09.408 [2024-10-05 08:43:45.687547] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:09.408 true 00:07:09.408 
08:43:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.408 08:43:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:09.408 08:43:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:09.408 08:43:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.408 08:43:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.408 [2024-10-05 08:43:45.703581] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:09.408 08:43:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.408 08:43:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:09.408 08:43:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:09.408 08:43:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:09.408 08:43:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:09.408 08:43:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:09.408 08:43:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:09.408 08:43:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.408 08:43:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.408 [2024-10-05 08:43:45.747372] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:09.408 [2024-10-05 08:43:45.747430] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:09.408 [2024-10-05 08:43:45.747484] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:09.408 true 00:07:09.408 08:43:45 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.408 08:43:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:09.408 08:43:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:09.408 08:43:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.408 08:43:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.408 [2024-10-05 08:43:45.763469] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:09.408 08:43:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.408 08:43:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:09.408 08:43:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:09.408 08:43:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:09.408 08:43:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:09.408 08:43:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:09.408 08:43:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60572 00:07:09.408 08:43:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 60572 ']' 00:07:09.408 08:43:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 60572 00:07:09.408 08:43:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname 00:07:09.408 08:43:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:09.408 08:43:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60572 00:07:09.408 killing process with pid 60572 00:07:09.408 08:43:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:09.408 08:43:45 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:09.408 08:43:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60572' 00:07:09.408 08:43:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 60572 00:07:09.408 [2024-10-05 08:43:45.826568] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:09.408 [2024-10-05 08:43:45.826642] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:09.408 08:43:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 60572 00:07:09.408 [2024-10-05 08:43:45.827097] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:09.408 [2024-10-05 08:43:45.827158] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:09.408 [2024-10-05 08:43:45.844643] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:10.789 08:43:47 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:10.789 00:07:10.789 real 0m2.440s 00:07:10.789 user 0m2.470s 00:07:10.789 sys 0m0.443s 00:07:10.789 ************************************ 00:07:10.789 END TEST raid1_resize_test 00:07:10.789 ************************************ 00:07:10.789 08:43:47 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.789 08:43:47 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.789 08:43:47 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:10.789 08:43:47 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:10.789 08:43:47 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:10.789 08:43:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:10.789 08:43:47 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.789 08:43:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:10.789 ************************************ 00:07:10.789 START TEST raid_state_function_test 00:07:10.789 ************************************ 00:07:10.789 08:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:07:10.789 08:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:10.789 08:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:10.789 08:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:10.789 08:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:10.789 08:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:10.789 08:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:10.789 08:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:10.789 08:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:10.789 08:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:10.789 08:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:10.789 08:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:10.789 08:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:10.789 08:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:10.789 08:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:10.789 08:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:07:10.789 08:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:10.789 08:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:10.789 08:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:10.789 08:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:10.789 08:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:10.789 08:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:10.789 08:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:10.789 08:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:10.789 08:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60622 00:07:10.790 08:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:10.790 08:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60622' 00:07:10.790 Process raid pid: 60622 00:07:10.790 08:43:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60622 00:07:10.790 08:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 60622 ']' 00:07:10.790 08:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.790 08:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:10.790 08:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:10.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.790 08:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:10.790 08:43:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.049 [2024-10-05 08:43:47.322701] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:07:11.049 [2024-10-05 08:43:47.322948] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:11.049 [2024-10-05 08:43:47.492834] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.328 [2024-10-05 08:43:47.726752] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.604 [2024-10-05 08:43:47.967249] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:11.604 [2024-10-05 08:43:47.967388] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:11.865 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:11.865 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:11.865 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:11.865 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.865 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.865 [2024-10-05 08:43:48.140807] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:11.865 [2024-10-05 08:43:48.140873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:07:11.865 [2024-10-05 08:43:48.140882] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:11.865 [2024-10-05 08:43:48.140893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:11.865 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.865 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:11.865 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:11.865 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:11.865 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:11.865 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:11.865 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:11.865 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:11.865 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:11.865 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:11.865 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:11.865 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.865 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.865 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.865 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:07:11.865 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.865 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:11.865 "name": "Existed_Raid", 00:07:11.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:11.865 "strip_size_kb": 64, 00:07:11.865 "state": "configuring", 00:07:11.865 "raid_level": "raid0", 00:07:11.865 "superblock": false, 00:07:11.865 "num_base_bdevs": 2, 00:07:11.865 "num_base_bdevs_discovered": 0, 00:07:11.865 "num_base_bdevs_operational": 2, 00:07:11.865 "base_bdevs_list": [ 00:07:11.865 { 00:07:11.865 "name": "BaseBdev1", 00:07:11.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:11.865 "is_configured": false, 00:07:11.865 "data_offset": 0, 00:07:11.865 "data_size": 0 00:07:11.865 }, 00:07:11.865 { 00:07:11.865 "name": "BaseBdev2", 00:07:11.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:11.865 "is_configured": false, 00:07:11.865 "data_offset": 0, 00:07:11.865 "data_size": 0 00:07:11.865 } 00:07:11.865 ] 00:07:11.865 }' 00:07:11.865 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:11.865 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.126 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:12.126 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.126 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.126 [2024-10-05 08:43:48.540035] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:12.126 [2024-10-05 08:43:48.540144] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:12.126 08:43:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.126 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:12.126 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.126 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.126 [2024-10-05 08:43:48.552047] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:12.126 [2024-10-05 08:43:48.552125] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:12.126 [2024-10-05 08:43:48.552151] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:12.126 [2024-10-05 08:43:48.552176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:12.126 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.126 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:12.126 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.126 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.386 [2024-10-05 08:43:48.614421] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:12.386 BaseBdev1 00:07:12.386 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.386 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:12.386 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:12.386 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local 
bdev_timeout= 00:07:12.386 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:12.386 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:12.386 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:12.386 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:12.386 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.386 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.386 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.386 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:12.386 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.386 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.386 [ 00:07:12.386 { 00:07:12.386 "name": "BaseBdev1", 00:07:12.386 "aliases": [ 00:07:12.386 "78e43447-3d8d-4128-8ffe-71ae355a3796" 00:07:12.386 ], 00:07:12.386 "product_name": "Malloc disk", 00:07:12.386 "block_size": 512, 00:07:12.386 "num_blocks": 65536, 00:07:12.386 "uuid": "78e43447-3d8d-4128-8ffe-71ae355a3796", 00:07:12.386 "assigned_rate_limits": { 00:07:12.386 "rw_ios_per_sec": 0, 00:07:12.386 "rw_mbytes_per_sec": 0, 00:07:12.386 "r_mbytes_per_sec": 0, 00:07:12.386 "w_mbytes_per_sec": 0 00:07:12.386 }, 00:07:12.386 "claimed": true, 00:07:12.386 "claim_type": "exclusive_write", 00:07:12.386 "zoned": false, 00:07:12.386 "supported_io_types": { 00:07:12.386 "read": true, 00:07:12.386 "write": true, 00:07:12.386 "unmap": true, 00:07:12.386 "flush": true, 00:07:12.386 "reset": true, 00:07:12.386 "nvme_admin": false, 00:07:12.386 "nvme_io": 
false, 00:07:12.386 "nvme_io_md": false, 00:07:12.386 "write_zeroes": true, 00:07:12.386 "zcopy": true, 00:07:12.386 "get_zone_info": false, 00:07:12.386 "zone_management": false, 00:07:12.386 "zone_append": false, 00:07:12.386 "compare": false, 00:07:12.386 "compare_and_write": false, 00:07:12.386 "abort": true, 00:07:12.386 "seek_hole": false, 00:07:12.386 "seek_data": false, 00:07:12.386 "copy": true, 00:07:12.386 "nvme_iov_md": false 00:07:12.386 }, 00:07:12.386 "memory_domains": [ 00:07:12.386 { 00:07:12.386 "dma_device_id": "system", 00:07:12.386 "dma_device_type": 1 00:07:12.386 }, 00:07:12.386 { 00:07:12.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:12.386 "dma_device_type": 2 00:07:12.386 } 00:07:12.386 ], 00:07:12.386 "driver_specific": {} 00:07:12.386 } 00:07:12.386 ] 00:07:12.386 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.386 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:12.386 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:12.386 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:12.386 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:12.386 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:12.386 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:12.386 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:12.386 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:12.386 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:12.386 08:43:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:12.387 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:12.387 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.387 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.387 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.387 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:12.387 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.387 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:12.387 "name": "Existed_Raid", 00:07:12.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:12.387 "strip_size_kb": 64, 00:07:12.387 "state": "configuring", 00:07:12.387 "raid_level": "raid0", 00:07:12.387 "superblock": false, 00:07:12.387 "num_base_bdevs": 2, 00:07:12.387 "num_base_bdevs_discovered": 1, 00:07:12.387 "num_base_bdevs_operational": 2, 00:07:12.387 "base_bdevs_list": [ 00:07:12.387 { 00:07:12.387 "name": "BaseBdev1", 00:07:12.387 "uuid": "78e43447-3d8d-4128-8ffe-71ae355a3796", 00:07:12.387 "is_configured": true, 00:07:12.387 "data_offset": 0, 00:07:12.387 "data_size": 65536 00:07:12.387 }, 00:07:12.387 { 00:07:12.387 "name": "BaseBdev2", 00:07:12.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:12.387 "is_configured": false, 00:07:12.387 "data_offset": 0, 00:07:12.387 "data_size": 0 00:07:12.387 } 00:07:12.387 ] 00:07:12.387 }' 00:07:12.387 08:43:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:12.387 08:43:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.648 08:43:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:12.648 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.648 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.648 [2024-10-05 08:43:49.005771] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:12.648 [2024-10-05 08:43:49.005812] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:12.648 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.648 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:12.648 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.648 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.648 [2024-10-05 08:43:49.013796] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:12.648 [2024-10-05 08:43:49.015712] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:12.648 [2024-10-05 08:43:49.015751] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:12.648 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.648 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:12.648 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:12.648 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:12.648 08:43:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:12.648 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:12.648 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:12.648 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:12.648 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:12.648 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:12.648 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:12.648 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:12.648 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:12.648 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.648 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.648 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.648 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:12.648 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.648 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:12.648 "name": "Existed_Raid", 00:07:12.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:12.648 "strip_size_kb": 64, 00:07:12.648 "state": "configuring", 00:07:12.648 "raid_level": "raid0", 00:07:12.648 "superblock": false, 00:07:12.648 "num_base_bdevs": 2, 00:07:12.648 "num_base_bdevs_discovered": 1, 00:07:12.648 "num_base_bdevs_operational": 2, 
00:07:12.648 "base_bdevs_list": [ 00:07:12.648 { 00:07:12.648 "name": "BaseBdev1", 00:07:12.648 "uuid": "78e43447-3d8d-4128-8ffe-71ae355a3796", 00:07:12.648 "is_configured": true, 00:07:12.648 "data_offset": 0, 00:07:12.648 "data_size": 65536 00:07:12.648 }, 00:07:12.648 { 00:07:12.648 "name": "BaseBdev2", 00:07:12.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:12.648 "is_configured": false, 00:07:12.648 "data_offset": 0, 00:07:12.648 "data_size": 0 00:07:12.648 } 00:07:12.648 ] 00:07:12.648 }' 00:07:12.648 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:12.648 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.219 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:13.219 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.219 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.219 [2024-10-05 08:43:49.481794] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:13.219 [2024-10-05 08:43:49.481911] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:13.219 [2024-10-05 08:43:49.481925] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:13.219 [2024-10-05 08:43:49.482269] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:13.219 [2024-10-05 08:43:49.482439] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:13.219 [2024-10-05 08:43:49.482457] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:13.219 [2024-10-05 08:43:49.482730] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:13.219 BaseBdev2 00:07:13.219 
08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.219 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:13.219 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:13.219 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:13.219 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:13.219 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:13.219 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:13.219 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:13.219 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.219 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.219 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.219 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:13.219 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.219 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.219 [ 00:07:13.219 { 00:07:13.219 "name": "BaseBdev2", 00:07:13.219 "aliases": [ 00:07:13.219 "93677f01-ed6b-4661-a34e-80d8e968ee24" 00:07:13.219 ], 00:07:13.219 "product_name": "Malloc disk", 00:07:13.219 "block_size": 512, 00:07:13.219 "num_blocks": 65536, 00:07:13.219 "uuid": "93677f01-ed6b-4661-a34e-80d8e968ee24", 00:07:13.219 "assigned_rate_limits": { 00:07:13.219 "rw_ios_per_sec": 0, 00:07:13.219 "rw_mbytes_per_sec": 0, 
00:07:13.219 "r_mbytes_per_sec": 0, 00:07:13.219 "w_mbytes_per_sec": 0 00:07:13.219 }, 00:07:13.219 "claimed": true, 00:07:13.219 "claim_type": "exclusive_write", 00:07:13.219 "zoned": false, 00:07:13.220 "supported_io_types": { 00:07:13.220 "read": true, 00:07:13.220 "write": true, 00:07:13.220 "unmap": true, 00:07:13.220 "flush": true, 00:07:13.220 "reset": true, 00:07:13.220 "nvme_admin": false, 00:07:13.220 "nvme_io": false, 00:07:13.220 "nvme_io_md": false, 00:07:13.220 "write_zeroes": true, 00:07:13.220 "zcopy": true, 00:07:13.220 "get_zone_info": false, 00:07:13.220 "zone_management": false, 00:07:13.220 "zone_append": false, 00:07:13.220 "compare": false, 00:07:13.220 "compare_and_write": false, 00:07:13.220 "abort": true, 00:07:13.220 "seek_hole": false, 00:07:13.220 "seek_data": false, 00:07:13.220 "copy": true, 00:07:13.220 "nvme_iov_md": false 00:07:13.220 }, 00:07:13.220 "memory_domains": [ 00:07:13.220 { 00:07:13.220 "dma_device_id": "system", 00:07:13.220 "dma_device_type": 1 00:07:13.220 }, 00:07:13.220 { 00:07:13.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:13.220 "dma_device_type": 2 00:07:13.220 } 00:07:13.220 ], 00:07:13.220 "driver_specific": {} 00:07:13.220 } 00:07:13.220 ] 00:07:13.220 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.220 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:13.220 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:13.220 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:13.220 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:13.220 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:13.220 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:07:13.220 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:13.220 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:13.220 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:13.220 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:13.220 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:13.220 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:13.220 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:13.220 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:13.220 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:13.220 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.220 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.220 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.220 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:13.220 "name": "Existed_Raid", 00:07:13.220 "uuid": "702ac7b2-fe2f-4115-88ea-6fedfadc4393", 00:07:13.220 "strip_size_kb": 64, 00:07:13.220 "state": "online", 00:07:13.220 "raid_level": "raid0", 00:07:13.220 "superblock": false, 00:07:13.220 "num_base_bdevs": 2, 00:07:13.220 "num_base_bdevs_discovered": 2, 00:07:13.220 "num_base_bdevs_operational": 2, 00:07:13.220 "base_bdevs_list": [ 00:07:13.220 { 00:07:13.220 "name": "BaseBdev1", 00:07:13.220 "uuid": "78e43447-3d8d-4128-8ffe-71ae355a3796", 00:07:13.220 
"is_configured": true, 00:07:13.220 "data_offset": 0, 00:07:13.220 "data_size": 65536 00:07:13.220 }, 00:07:13.220 { 00:07:13.220 "name": "BaseBdev2", 00:07:13.220 "uuid": "93677f01-ed6b-4661-a34e-80d8e968ee24", 00:07:13.220 "is_configured": true, 00:07:13.220 "data_offset": 0, 00:07:13.220 "data_size": 65536 00:07:13.220 } 00:07:13.220 ] 00:07:13.220 }' 00:07:13.220 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:13.220 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.480 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:13.480 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:13.480 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:13.480 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:13.480 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:13.480 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:13.480 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:13.480 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:13.480 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.480 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.480 [2024-10-05 08:43:49.869343] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:13.480 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.480 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:07:13.480 "name": "Existed_Raid", 00:07:13.480 "aliases": [ 00:07:13.480 "702ac7b2-fe2f-4115-88ea-6fedfadc4393" 00:07:13.480 ], 00:07:13.480 "product_name": "Raid Volume", 00:07:13.480 "block_size": 512, 00:07:13.480 "num_blocks": 131072, 00:07:13.480 "uuid": "702ac7b2-fe2f-4115-88ea-6fedfadc4393", 00:07:13.480 "assigned_rate_limits": { 00:07:13.480 "rw_ios_per_sec": 0, 00:07:13.480 "rw_mbytes_per_sec": 0, 00:07:13.480 "r_mbytes_per_sec": 0, 00:07:13.480 "w_mbytes_per_sec": 0 00:07:13.480 }, 00:07:13.480 "claimed": false, 00:07:13.480 "zoned": false, 00:07:13.480 "supported_io_types": { 00:07:13.480 "read": true, 00:07:13.480 "write": true, 00:07:13.480 "unmap": true, 00:07:13.480 "flush": true, 00:07:13.480 "reset": true, 00:07:13.480 "nvme_admin": false, 00:07:13.480 "nvme_io": false, 00:07:13.480 "nvme_io_md": false, 00:07:13.480 "write_zeroes": true, 00:07:13.480 "zcopy": false, 00:07:13.480 "get_zone_info": false, 00:07:13.480 "zone_management": false, 00:07:13.480 "zone_append": false, 00:07:13.480 "compare": false, 00:07:13.480 "compare_and_write": false, 00:07:13.480 "abort": false, 00:07:13.480 "seek_hole": false, 00:07:13.480 "seek_data": false, 00:07:13.480 "copy": false, 00:07:13.480 "nvme_iov_md": false 00:07:13.480 }, 00:07:13.480 "memory_domains": [ 00:07:13.480 { 00:07:13.480 "dma_device_id": "system", 00:07:13.480 "dma_device_type": 1 00:07:13.480 }, 00:07:13.480 { 00:07:13.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:13.480 "dma_device_type": 2 00:07:13.480 }, 00:07:13.480 { 00:07:13.480 "dma_device_id": "system", 00:07:13.480 "dma_device_type": 1 00:07:13.480 }, 00:07:13.480 { 00:07:13.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:13.480 "dma_device_type": 2 00:07:13.480 } 00:07:13.480 ], 00:07:13.480 "driver_specific": { 00:07:13.480 "raid": { 00:07:13.480 "uuid": "702ac7b2-fe2f-4115-88ea-6fedfadc4393", 00:07:13.480 "strip_size_kb": 64, 00:07:13.480 "state": "online", 00:07:13.480 "raid_level": "raid0", 
00:07:13.480 "superblock": false, 00:07:13.480 "num_base_bdevs": 2, 00:07:13.480 "num_base_bdevs_discovered": 2, 00:07:13.480 "num_base_bdevs_operational": 2, 00:07:13.480 "base_bdevs_list": [ 00:07:13.480 { 00:07:13.480 "name": "BaseBdev1", 00:07:13.480 "uuid": "78e43447-3d8d-4128-8ffe-71ae355a3796", 00:07:13.480 "is_configured": true, 00:07:13.480 "data_offset": 0, 00:07:13.480 "data_size": 65536 00:07:13.480 }, 00:07:13.480 { 00:07:13.480 "name": "BaseBdev2", 00:07:13.480 "uuid": "93677f01-ed6b-4661-a34e-80d8e968ee24", 00:07:13.480 "is_configured": true, 00:07:13.480 "data_offset": 0, 00:07:13.480 "data_size": 65536 00:07:13.480 } 00:07:13.480 ] 00:07:13.480 } 00:07:13.480 } 00:07:13.480 }' 00:07:13.480 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:13.480 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:13.481 BaseBdev2' 00:07:13.481 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:13.741 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:13.741 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:13.741 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:13.741 08:43:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:13.741 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.741 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.741 08:43:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:07:13.741 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:13.741 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:13.741 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:13.741 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:13.741 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:13.741 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.741 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.741 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.741 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:13.741 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:13.741 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:13.741 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.741 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.741 [2024-10-05 08:43:50.044966] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:13.741 [2024-10-05 08:43:50.044995] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:13.741 [2024-10-05 08:43:50.045041] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:13.741 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.741 08:43:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:13.741 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:13.741 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:13.741 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:13.741 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:13.741 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:13.741 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:13.741 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:13.741 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:13.741 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:13.741 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:13.741 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:13.741 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:13.741 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:13.741 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:13.741 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:13.741 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:13.741 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:13.741 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.741 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.741 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:13.741 "name": "Existed_Raid", 00:07:13.741 "uuid": "702ac7b2-fe2f-4115-88ea-6fedfadc4393", 00:07:13.741 "strip_size_kb": 64, 00:07:13.741 "state": "offline", 00:07:13.741 "raid_level": "raid0", 00:07:13.741 "superblock": false, 00:07:13.741 "num_base_bdevs": 2, 00:07:13.741 "num_base_bdevs_discovered": 1, 00:07:13.741 "num_base_bdevs_operational": 1, 00:07:13.741 "base_bdevs_list": [ 00:07:13.741 { 00:07:13.741 "name": null, 00:07:13.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:13.741 "is_configured": false, 00:07:13.741 "data_offset": 0, 00:07:13.741 "data_size": 65536 00:07:13.741 }, 00:07:13.741 { 00:07:13.741 "name": "BaseBdev2", 00:07:13.741 "uuid": "93677f01-ed6b-4661-a34e-80d8e968ee24", 00:07:13.741 "is_configured": true, 00:07:13.741 "data_offset": 0, 00:07:13.741 "data_size": 65536 00:07:13.741 } 00:07:13.741 ] 00:07:13.741 }' 00:07:13.741 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:13.741 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.310 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:14.310 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:14.310 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.310 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.310 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.310 08:43:50 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:14.310 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.310 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:14.310 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:14.311 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:14.311 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.311 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.311 [2024-10-05 08:43:50.642385] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:14.311 [2024-10-05 08:43:50.642456] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:14.311 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.311 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:14.311 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:14.311 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.311 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:14.311 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.311 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.311 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.570 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:14.570 08:43:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:14.570 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:14.570 08:43:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60622 00:07:14.570 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 60622 ']' 00:07:14.570 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 60622 00:07:14.570 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:14.570 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:14.570 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60622 00:07:14.570 killing process with pid 60622 00:07:14.570 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:14.570 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:14.570 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60622' 00:07:14.570 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 60622 00:07:14.570 [2024-10-05 08:43:50.836517] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:14.570 08:43:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 60622 00:07:14.570 [2024-10-05 08:43:50.853315] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:15.951 08:43:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:15.951 00:07:15.951 real 0m4.955s 00:07:15.951 user 0m6.800s 00:07:15.951 sys 0m0.873s 00:07:15.951 08:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:07:15.951 ************************************ 00:07:15.951 END TEST raid_state_function_test 00:07:15.951 ************************************ 00:07:15.951 08:43:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.951 08:43:52 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:15.951 08:43:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:15.951 08:43:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.951 08:43:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:15.951 ************************************ 00:07:15.951 START TEST raid_state_function_test_sb 00:07:15.951 ************************************ 00:07:15.951 08:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:07:15.951 08:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:15.951 08:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:15.951 08:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:15.951 08:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:15.951 08:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:15.951 08:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:15.951 08:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:15.951 08:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:15.951 08:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:15.951 08:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # 
echo BaseBdev2 00:07:15.951 08:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:15.951 08:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:15.951 08:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:15.951 08:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:15.951 08:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:15.951 08:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:15.951 08:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:15.951 08:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:15.951 08:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:15.951 08:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:15.951 08:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:15.951 08:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:15.951 08:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:15.951 Process raid pid: 60840 00:07:15.951 08:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60840 00:07:15.951 08:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:15.951 08:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60840' 00:07:15.951 08:43:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 60840 00:07:15.951 08:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 60840 ']' 00:07:15.951 08:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.951 08:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:15.951 08:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.951 08:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:15.951 08:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.951 [2024-10-05 08:43:52.342139] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:07:15.951 [2024-10-05 08:43:52.342240] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:16.211 [2024-10-05 08:43:52.505476] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.472 [2024-10-05 08:43:52.753072] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.732 [2024-10-05 08:43:52.981072] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.732 [2024-10-05 08:43:52.981114] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.732 08:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:16.732 08:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:16.732 08:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:16.732 08:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.732 08:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.732 [2024-10-05 08:43:53.156707] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:16.732 [2024-10-05 08:43:53.156769] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:16.732 [2024-10-05 08:43:53.156779] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:16.732 [2024-10-05 08:43:53.156791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:16.732 08:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.732 
08:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:16.732 08:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:16.732 08:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:16.732 08:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:16.732 08:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:16.732 08:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:16.732 08:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:16.732 08:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:16.732 08:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:16.732 08:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:16.732 08:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.732 08:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:16.732 08:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.732 08:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:16.732 08:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.991 08:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:16.991 "name": "Existed_Raid", 00:07:16.991 "uuid": "aba4fe96-9f72-4c05-a0ae-8cff47a880f7", 00:07:16.991 "strip_size_kb": 
64, 00:07:16.991 "state": "configuring", 00:07:16.991 "raid_level": "raid0", 00:07:16.991 "superblock": true, 00:07:16.991 "num_base_bdevs": 2, 00:07:16.991 "num_base_bdevs_discovered": 0, 00:07:16.991 "num_base_bdevs_operational": 2, 00:07:16.991 "base_bdevs_list": [ 00:07:16.991 { 00:07:16.991 "name": "BaseBdev1", 00:07:16.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:16.991 "is_configured": false, 00:07:16.991 "data_offset": 0, 00:07:16.991 "data_size": 0 00:07:16.991 }, 00:07:16.991 { 00:07:16.991 "name": "BaseBdev2", 00:07:16.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:16.991 "is_configured": false, 00:07:16.991 "data_offset": 0, 00:07:16.991 "data_size": 0 00:07:16.991 } 00:07:16.991 ] 00:07:16.991 }' 00:07:16.991 08:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:16.991 08:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.250 08:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:17.250 08:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.250 08:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.250 [2024-10-05 08:43:53.587828] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:17.250 [2024-10-05 08:43:53.587908] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:17.250 08:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.250 08:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:17.250 08:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.250 08:43:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.250 [2024-10-05 08:43:53.595848] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:17.250 [2024-10-05 08:43:53.595920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:17.250 [2024-10-05 08:43:53.595962] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:17.250 [2024-10-05 08:43:53.595998] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:17.250 08:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.250 08:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:17.250 08:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.250 08:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.250 [2024-10-05 08:43:53.675357] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:17.250 BaseBdev1 00:07:17.250 08:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.250 08:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:17.250 08:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:17.250 08:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:17.250 08:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:17.250 08:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:17.250 08:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:07:17.250 08:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:17.250 08:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.250 08:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.250 08:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.250 08:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:17.250 08:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.250 08:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.250 [ 00:07:17.250 { 00:07:17.250 "name": "BaseBdev1", 00:07:17.250 "aliases": [ 00:07:17.250 "9f6e08c4-ad2e-4c21-bcdd-188835d33372" 00:07:17.250 ], 00:07:17.250 "product_name": "Malloc disk", 00:07:17.251 "block_size": 512, 00:07:17.251 "num_blocks": 65536, 00:07:17.251 "uuid": "9f6e08c4-ad2e-4c21-bcdd-188835d33372", 00:07:17.251 "assigned_rate_limits": { 00:07:17.251 "rw_ios_per_sec": 0, 00:07:17.251 "rw_mbytes_per_sec": 0, 00:07:17.251 "r_mbytes_per_sec": 0, 00:07:17.251 "w_mbytes_per_sec": 0 00:07:17.251 }, 00:07:17.251 "claimed": true, 00:07:17.251 "claim_type": "exclusive_write", 00:07:17.251 "zoned": false, 00:07:17.251 "supported_io_types": { 00:07:17.251 "read": true, 00:07:17.251 "write": true, 00:07:17.251 "unmap": true, 00:07:17.251 "flush": true, 00:07:17.251 "reset": true, 00:07:17.251 "nvme_admin": false, 00:07:17.251 "nvme_io": false, 00:07:17.251 "nvme_io_md": false, 00:07:17.251 "write_zeroes": true, 00:07:17.251 "zcopy": true, 00:07:17.251 "get_zone_info": false, 00:07:17.251 "zone_management": false, 00:07:17.251 "zone_append": false, 00:07:17.251 "compare": false, 00:07:17.251 "compare_and_write": false, 00:07:17.251 
"abort": true, 00:07:17.251 "seek_hole": false, 00:07:17.251 "seek_data": false, 00:07:17.251 "copy": true, 00:07:17.251 "nvme_iov_md": false 00:07:17.251 }, 00:07:17.251 "memory_domains": [ 00:07:17.251 { 00:07:17.251 "dma_device_id": "system", 00:07:17.251 "dma_device_type": 1 00:07:17.251 }, 00:07:17.251 { 00:07:17.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:17.251 "dma_device_type": 2 00:07:17.251 } 00:07:17.251 ], 00:07:17.251 "driver_specific": {} 00:07:17.251 } 00:07:17.251 ] 00:07:17.251 08:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.251 08:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:17.251 08:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:17.251 08:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:17.251 08:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:17.251 08:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:17.251 08:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:17.251 08:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:17.251 08:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:17.251 08:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:17.251 08:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:17.251 08:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:17.251 08:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:17.251 08:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:17.251 08:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.251 08:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.513 08:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.513 08:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.513 "name": "Existed_Raid", 00:07:17.513 "uuid": "a057a9e5-af2d-47a2-8bc2-5f89b32fa864", 00:07:17.513 "strip_size_kb": 64, 00:07:17.513 "state": "configuring", 00:07:17.513 "raid_level": "raid0", 00:07:17.513 "superblock": true, 00:07:17.513 "num_base_bdevs": 2, 00:07:17.513 "num_base_bdevs_discovered": 1, 00:07:17.513 "num_base_bdevs_operational": 2, 00:07:17.513 "base_bdevs_list": [ 00:07:17.513 { 00:07:17.513 "name": "BaseBdev1", 00:07:17.513 "uuid": "9f6e08c4-ad2e-4c21-bcdd-188835d33372", 00:07:17.513 "is_configured": true, 00:07:17.513 "data_offset": 2048, 00:07:17.513 "data_size": 63488 00:07:17.513 }, 00:07:17.513 { 00:07:17.513 "name": "BaseBdev2", 00:07:17.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.513 "is_configured": false, 00:07:17.513 "data_offset": 0, 00:07:17.513 "data_size": 0 00:07:17.513 } 00:07:17.513 ] 00:07:17.513 }' 00:07:17.513 08:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:17.513 08:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.773 08:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:17.773 08:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.773 08:43:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:17.773 [2024-10-05 08:43:54.130568] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:17.773 [2024-10-05 08:43:54.130611] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:17.773 08:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.773 08:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:17.773 08:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.773 08:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.773 [2024-10-05 08:43:54.142594] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:17.773 [2024-10-05 08:43:54.144584] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:17.773 [2024-10-05 08:43:54.144658] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:17.773 08:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.773 08:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:17.773 08:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:17.773 08:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:17.773 08:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:17.773 08:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:17.773 08:43:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:17.773 08:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:17.773 08:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:17.773 08:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:17.773 08:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:17.773 08:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:17.773 08:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:17.773 08:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.773 08:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.773 08:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.773 08:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:17.773 08:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.773 08:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.773 "name": "Existed_Raid", 00:07:17.773 "uuid": "af804cb3-48ea-4c61-ac7a-9b138e723560", 00:07:17.773 "strip_size_kb": 64, 00:07:17.773 "state": "configuring", 00:07:17.773 "raid_level": "raid0", 00:07:17.774 "superblock": true, 00:07:17.774 "num_base_bdevs": 2, 00:07:17.774 "num_base_bdevs_discovered": 1, 00:07:17.774 "num_base_bdevs_operational": 2, 00:07:17.774 "base_bdevs_list": [ 00:07:17.774 { 00:07:17.774 "name": "BaseBdev1", 00:07:17.774 "uuid": "9f6e08c4-ad2e-4c21-bcdd-188835d33372", 00:07:17.774 "is_configured": true, 00:07:17.774 "data_offset": 2048, 
00:07:17.774 "data_size": 63488 00:07:17.774 }, 00:07:17.774 { 00:07:17.774 "name": "BaseBdev2", 00:07:17.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.774 "is_configured": false, 00:07:17.774 "data_offset": 0, 00:07:17.774 "data_size": 0 00:07:17.774 } 00:07:17.774 ] 00:07:17.774 }' 00:07:17.774 08:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:17.774 08:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.344 08:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:18.344 08:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.344 08:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.344 [2024-10-05 08:43:54.592588] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:18.344 [2024-10-05 08:43:54.592929] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:18.344 [2024-10-05 08:43:54.593002] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:18.344 [2024-10-05 08:43:54.593314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:18.344 BaseBdev2 00:07:18.344 [2024-10-05 08:43:54.593501] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:18.344 [2024-10-05 08:43:54.593531] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:18.344 [2024-10-05 08:43:54.593682] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:18.344 08:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.344 08:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:07:18.344 08:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:18.344 08:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:18.344 08:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:18.344 08:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:18.344 08:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:18.344 08:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:18.344 08:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.344 08:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.344 08:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.344 08:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:18.344 08:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.344 08:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.344 [ 00:07:18.344 { 00:07:18.344 "name": "BaseBdev2", 00:07:18.344 "aliases": [ 00:07:18.344 "e25910c1-679a-4069-bea6-26141648ce9d" 00:07:18.344 ], 00:07:18.344 "product_name": "Malloc disk", 00:07:18.344 "block_size": 512, 00:07:18.344 "num_blocks": 65536, 00:07:18.344 "uuid": "e25910c1-679a-4069-bea6-26141648ce9d", 00:07:18.344 "assigned_rate_limits": { 00:07:18.344 "rw_ios_per_sec": 0, 00:07:18.344 "rw_mbytes_per_sec": 0, 00:07:18.344 "r_mbytes_per_sec": 0, 00:07:18.344 "w_mbytes_per_sec": 0 00:07:18.344 }, 00:07:18.344 "claimed": true, 00:07:18.344 "claim_type": 
"exclusive_write", 00:07:18.344 "zoned": false, 00:07:18.344 "supported_io_types": { 00:07:18.344 "read": true, 00:07:18.344 "write": true, 00:07:18.344 "unmap": true, 00:07:18.344 "flush": true, 00:07:18.344 "reset": true, 00:07:18.344 "nvme_admin": false, 00:07:18.344 "nvme_io": false, 00:07:18.344 "nvme_io_md": false, 00:07:18.344 "write_zeroes": true, 00:07:18.344 "zcopy": true, 00:07:18.344 "get_zone_info": false, 00:07:18.344 "zone_management": false, 00:07:18.344 "zone_append": false, 00:07:18.344 "compare": false, 00:07:18.344 "compare_and_write": false, 00:07:18.344 "abort": true, 00:07:18.344 "seek_hole": false, 00:07:18.344 "seek_data": false, 00:07:18.344 "copy": true, 00:07:18.344 "nvme_iov_md": false 00:07:18.344 }, 00:07:18.344 "memory_domains": [ 00:07:18.344 { 00:07:18.344 "dma_device_id": "system", 00:07:18.344 "dma_device_type": 1 00:07:18.344 }, 00:07:18.344 { 00:07:18.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.344 "dma_device_type": 2 00:07:18.344 } 00:07:18.344 ], 00:07:18.344 "driver_specific": {} 00:07:18.344 } 00:07:18.344 ] 00:07:18.344 08:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.344 08:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:18.344 08:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:18.344 08:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:18.345 08:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:18.345 08:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:18.345 08:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:18.345 08:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:18.345 08:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:18.345 08:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:18.345 08:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:18.345 08:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.345 08:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:18.345 08:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.345 08:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:18.345 08:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.345 08:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.345 08:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.345 08:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.345 08:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:18.345 "name": "Existed_Raid", 00:07:18.345 "uuid": "af804cb3-48ea-4c61-ac7a-9b138e723560", 00:07:18.345 "strip_size_kb": 64, 00:07:18.345 "state": "online", 00:07:18.345 "raid_level": "raid0", 00:07:18.345 "superblock": true, 00:07:18.345 "num_base_bdevs": 2, 00:07:18.345 "num_base_bdevs_discovered": 2, 00:07:18.345 "num_base_bdevs_operational": 2, 00:07:18.345 "base_bdevs_list": [ 00:07:18.345 { 00:07:18.345 "name": "BaseBdev1", 00:07:18.345 "uuid": "9f6e08c4-ad2e-4c21-bcdd-188835d33372", 00:07:18.345 "is_configured": true, 00:07:18.345 "data_offset": 2048, 00:07:18.345 "data_size": 63488 
00:07:18.345 }, 00:07:18.345 { 00:07:18.345 "name": "BaseBdev2", 00:07:18.345 "uuid": "e25910c1-679a-4069-bea6-26141648ce9d", 00:07:18.345 "is_configured": true, 00:07:18.345 "data_offset": 2048, 00:07:18.345 "data_size": 63488 00:07:18.345 } 00:07:18.345 ] 00:07:18.345 }' 00:07:18.345 08:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:18.345 08:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.605 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:18.605 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:18.605 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:18.605 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:18.605 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:18.605 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:18.605 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:18.605 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:18.605 08:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.605 08:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.605 [2024-10-05 08:43:55.032163] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:18.605 08:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.605 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:18.605 "name": 
"Existed_Raid", 00:07:18.605 "aliases": [ 00:07:18.605 "af804cb3-48ea-4c61-ac7a-9b138e723560" 00:07:18.605 ], 00:07:18.605 "product_name": "Raid Volume", 00:07:18.605 "block_size": 512, 00:07:18.605 "num_blocks": 126976, 00:07:18.605 "uuid": "af804cb3-48ea-4c61-ac7a-9b138e723560", 00:07:18.605 "assigned_rate_limits": { 00:07:18.605 "rw_ios_per_sec": 0, 00:07:18.605 "rw_mbytes_per_sec": 0, 00:07:18.605 "r_mbytes_per_sec": 0, 00:07:18.605 "w_mbytes_per_sec": 0 00:07:18.605 }, 00:07:18.605 "claimed": false, 00:07:18.605 "zoned": false, 00:07:18.605 "supported_io_types": { 00:07:18.605 "read": true, 00:07:18.605 "write": true, 00:07:18.605 "unmap": true, 00:07:18.605 "flush": true, 00:07:18.605 "reset": true, 00:07:18.605 "nvme_admin": false, 00:07:18.605 "nvme_io": false, 00:07:18.605 "nvme_io_md": false, 00:07:18.605 "write_zeroes": true, 00:07:18.605 "zcopy": false, 00:07:18.605 "get_zone_info": false, 00:07:18.605 "zone_management": false, 00:07:18.605 "zone_append": false, 00:07:18.605 "compare": false, 00:07:18.605 "compare_and_write": false, 00:07:18.605 "abort": false, 00:07:18.605 "seek_hole": false, 00:07:18.605 "seek_data": false, 00:07:18.605 "copy": false, 00:07:18.605 "nvme_iov_md": false 00:07:18.605 }, 00:07:18.605 "memory_domains": [ 00:07:18.605 { 00:07:18.605 "dma_device_id": "system", 00:07:18.605 "dma_device_type": 1 00:07:18.605 }, 00:07:18.605 { 00:07:18.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.605 "dma_device_type": 2 00:07:18.605 }, 00:07:18.605 { 00:07:18.605 "dma_device_id": "system", 00:07:18.605 "dma_device_type": 1 00:07:18.605 }, 00:07:18.605 { 00:07:18.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.605 "dma_device_type": 2 00:07:18.605 } 00:07:18.605 ], 00:07:18.605 "driver_specific": { 00:07:18.605 "raid": { 00:07:18.605 "uuid": "af804cb3-48ea-4c61-ac7a-9b138e723560", 00:07:18.605 "strip_size_kb": 64, 00:07:18.605 "state": "online", 00:07:18.605 "raid_level": "raid0", 00:07:18.605 "superblock": true, 00:07:18.605 
"num_base_bdevs": 2, 00:07:18.605 "num_base_bdevs_discovered": 2, 00:07:18.605 "num_base_bdevs_operational": 2, 00:07:18.605 "base_bdevs_list": [ 00:07:18.605 { 00:07:18.605 "name": "BaseBdev1", 00:07:18.605 "uuid": "9f6e08c4-ad2e-4c21-bcdd-188835d33372", 00:07:18.605 "is_configured": true, 00:07:18.605 "data_offset": 2048, 00:07:18.605 "data_size": 63488 00:07:18.605 }, 00:07:18.605 { 00:07:18.605 "name": "BaseBdev2", 00:07:18.605 "uuid": "e25910c1-679a-4069-bea6-26141648ce9d", 00:07:18.605 "is_configured": true, 00:07:18.605 "data_offset": 2048, 00:07:18.605 "data_size": 63488 00:07:18.605 } 00:07:18.605 ] 00:07:18.605 } 00:07:18.605 } 00:07:18.605 }' 00:07:18.605 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:18.865 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:18.865 BaseBdev2' 00:07:18.865 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:18.865 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:18.865 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:18.865 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:18.865 08:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.865 08:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.865 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:18.865 08:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:18.865 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:18.865 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:18.865 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:18.865 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:18.865 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:18.865 08:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.865 08:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.865 08:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.865 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:18.865 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:18.865 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:18.865 08:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.865 08:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.865 [2024-10-05 08:43:55.223638] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:18.865 [2024-10-05 08:43:55.223709] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:18.865 [2024-10-05 08:43:55.223776] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:18.865 08:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:07:18.865 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:18.865 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:18.865 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:18.865 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:18.865 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:18.865 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:18.865 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:18.865 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:18.865 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:18.865 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:18.865 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:18.865 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:18.865 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.865 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:18.865 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.865 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:18.865 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.865 08:43:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.865 08:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.124 08:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.124 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.124 "name": "Existed_Raid", 00:07:19.124 "uuid": "af804cb3-48ea-4c61-ac7a-9b138e723560", 00:07:19.124 "strip_size_kb": 64, 00:07:19.124 "state": "offline", 00:07:19.124 "raid_level": "raid0", 00:07:19.124 "superblock": true, 00:07:19.124 "num_base_bdevs": 2, 00:07:19.124 "num_base_bdevs_discovered": 1, 00:07:19.124 "num_base_bdevs_operational": 1, 00:07:19.124 "base_bdevs_list": [ 00:07:19.124 { 00:07:19.124 "name": null, 00:07:19.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.124 "is_configured": false, 00:07:19.124 "data_offset": 0, 00:07:19.124 "data_size": 63488 00:07:19.124 }, 00:07:19.124 { 00:07:19.124 "name": "BaseBdev2", 00:07:19.124 "uuid": "e25910c1-679a-4069-bea6-26141648ce9d", 00:07:19.124 "is_configured": true, 00:07:19.124 "data_offset": 2048, 00:07:19.124 "data_size": 63488 00:07:19.124 } 00:07:19.124 ] 00:07:19.124 }' 00:07:19.124 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.124 08:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.384 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:19.384 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:19.384 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:19.384 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.384 08:43:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.384 08:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.384 08:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.384 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:19.384 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:19.384 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:19.384 08:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.384 08:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.384 [2024-10-05 08:43:55.774124] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:19.384 [2024-10-05 08:43:55.774243] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:19.644 08:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.644 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:19.644 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:19.644 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.644 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:19.644 08:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.644 08:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.644 08:43:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.644 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:19.644 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:19.644 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:19.644 08:43:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60840 00:07:19.644 08:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 60840 ']' 00:07:19.644 08:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 60840 00:07:19.644 08:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:19.644 08:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:19.644 08:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60840 00:07:19.644 killing process with pid 60840 00:07:19.644 08:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:19.644 08:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:19.644 08:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60840' 00:07:19.644 08:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 60840 00:07:19.644 [2024-10-05 08:43:55.953873] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:19.644 08:43:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 60840 00:07:19.644 [2024-10-05 08:43:55.970389] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:21.023 08:43:57 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@328 -- # return 0 00:07:21.023 00:07:21.023 real 0m5.047s 00:07:21.023 user 0m6.948s 00:07:21.023 sys 0m0.884s 00:07:21.023 08:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.023 08:43:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.023 ************************************ 00:07:21.023 END TEST raid_state_function_test_sb 00:07:21.023 ************************************ 00:07:21.023 08:43:57 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:21.023 08:43:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:21.023 08:43:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.023 08:43:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:21.023 ************************************ 00:07:21.023 START TEST raid_superblock_test 00:07:21.023 ************************************ 00:07:21.023 08:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:07:21.023 08:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:21.023 08:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:21.023 08:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:21.023 08:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:21.023 08:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:21.023 08:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:21.023 08:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:21.023 08:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:21.023 08:43:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:21.023 08:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:21.023 08:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:21.023 08:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:21.023 08:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:21.023 08:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:21.024 08:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:21.024 08:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:21.024 08:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61062 00:07:21.024 08:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61062 00:07:21.024 08:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:21.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.024 08:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 61062 ']' 00:07:21.024 08:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.024 08:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:21.024 08:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:21.024 08:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:21.024 08:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.024 [2024-10-05 08:43:57.447636] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:07:21.024 [2024-10-05 08:43:57.447750] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61062 ] 00:07:21.283 [2024-10-05 08:43:57.610199] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.543 [2024-10-05 08:43:57.850402] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.804 [2024-10-05 08:43:58.080927] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:21.804 [2024-10-05 08:43:58.080976] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:21.804 08:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:21.804 08:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:21.804 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:21.804 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:21.804 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:21.804 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:21.804 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:21.804 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:21.804 08:43:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:21.804 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:22.065 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:22.065 08:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.065 08:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.065 malloc1 00:07:22.065 08:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.065 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:22.065 08:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.065 08:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.065 [2024-10-05 08:43:58.328844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:22.065 [2024-10-05 08:43:58.328925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:22.065 [2024-10-05 08:43:58.328967] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:22.065 [2024-10-05 08:43:58.328995] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:22.065 [2024-10-05 08:43:58.331296] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:22.065 [2024-10-05 08:43:58.331332] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:22.065 pt1 00:07:22.065 08:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.065 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:22.065 08:43:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:22.065 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:22.065 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:22.065 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:22.065 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:22.065 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:22.065 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:22.065 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:22.066 08:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.066 08:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.066 malloc2 00:07:22.066 08:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.066 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:22.066 08:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.066 08:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.066 [2024-10-05 08:43:58.418980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:22.066 [2024-10-05 08:43:58.419108] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:22.066 [2024-10-05 08:43:58.419150] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:22.066 
[2024-10-05 08:43:58.419177] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:22.066 [2024-10-05 08:43:58.421477] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:22.066 [2024-10-05 08:43:58.421562] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:22.066 pt2 00:07:22.066 08:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.066 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:22.066 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:22.066 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:22.066 08:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.066 08:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.066 [2024-10-05 08:43:58.431028] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:22.066 [2024-10-05 08:43:58.433049] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:22.066 [2024-10-05 08:43:58.433263] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:22.066 [2024-10-05 08:43:58.433313] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:22.066 [2024-10-05 08:43:58.433562] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:22.066 [2024-10-05 08:43:58.433747] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:22.066 [2024-10-05 08:43:58.433787] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:22.066 [2024-10-05 08:43:58.433980] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:22.066 08:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.066 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:22.066 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:22.066 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:22.066 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:22.066 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:22.066 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:22.066 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.066 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.066 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.066 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.066 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.066 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:22.066 08:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.066 08:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.066 08:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.066 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.066 "name": "raid_bdev1", 00:07:22.066 "uuid": 
"a13e98b8-cec8-4f7f-a9b2-26ca59f1e104", 00:07:22.066 "strip_size_kb": 64, 00:07:22.066 "state": "online", 00:07:22.066 "raid_level": "raid0", 00:07:22.066 "superblock": true, 00:07:22.066 "num_base_bdevs": 2, 00:07:22.066 "num_base_bdevs_discovered": 2, 00:07:22.066 "num_base_bdevs_operational": 2, 00:07:22.066 "base_bdevs_list": [ 00:07:22.066 { 00:07:22.066 "name": "pt1", 00:07:22.066 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:22.066 "is_configured": true, 00:07:22.066 "data_offset": 2048, 00:07:22.066 "data_size": 63488 00:07:22.066 }, 00:07:22.066 { 00:07:22.066 "name": "pt2", 00:07:22.066 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:22.066 "is_configured": true, 00:07:22.066 "data_offset": 2048, 00:07:22.066 "data_size": 63488 00:07:22.066 } 00:07:22.066 ] 00:07:22.066 }' 00:07:22.066 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.066 08:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.636 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:22.636 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:22.636 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:22.636 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:22.636 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:22.636 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:22.636 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:22.636 08:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.636 08:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.636 
08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:22.636 [2024-10-05 08:43:58.822496] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:22.636 08:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.636 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:22.636 "name": "raid_bdev1", 00:07:22.636 "aliases": [ 00:07:22.636 "a13e98b8-cec8-4f7f-a9b2-26ca59f1e104" 00:07:22.636 ], 00:07:22.636 "product_name": "Raid Volume", 00:07:22.636 "block_size": 512, 00:07:22.636 "num_blocks": 126976, 00:07:22.636 "uuid": "a13e98b8-cec8-4f7f-a9b2-26ca59f1e104", 00:07:22.636 "assigned_rate_limits": { 00:07:22.636 "rw_ios_per_sec": 0, 00:07:22.636 "rw_mbytes_per_sec": 0, 00:07:22.636 "r_mbytes_per_sec": 0, 00:07:22.636 "w_mbytes_per_sec": 0 00:07:22.636 }, 00:07:22.636 "claimed": false, 00:07:22.636 "zoned": false, 00:07:22.636 "supported_io_types": { 00:07:22.636 "read": true, 00:07:22.636 "write": true, 00:07:22.636 "unmap": true, 00:07:22.636 "flush": true, 00:07:22.636 "reset": true, 00:07:22.636 "nvme_admin": false, 00:07:22.636 "nvme_io": false, 00:07:22.636 "nvme_io_md": false, 00:07:22.636 "write_zeroes": true, 00:07:22.636 "zcopy": false, 00:07:22.636 "get_zone_info": false, 00:07:22.636 "zone_management": false, 00:07:22.636 "zone_append": false, 00:07:22.636 "compare": false, 00:07:22.636 "compare_and_write": false, 00:07:22.636 "abort": false, 00:07:22.636 "seek_hole": false, 00:07:22.636 "seek_data": false, 00:07:22.636 "copy": false, 00:07:22.636 "nvme_iov_md": false 00:07:22.636 }, 00:07:22.636 "memory_domains": [ 00:07:22.636 { 00:07:22.636 "dma_device_id": "system", 00:07:22.636 "dma_device_type": 1 00:07:22.636 }, 00:07:22.636 { 00:07:22.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.636 "dma_device_type": 2 00:07:22.636 }, 00:07:22.636 { 00:07:22.636 "dma_device_id": "system", 00:07:22.636 
"dma_device_type": 1 00:07:22.636 }, 00:07:22.636 { 00:07:22.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.636 "dma_device_type": 2 00:07:22.636 } 00:07:22.636 ], 00:07:22.636 "driver_specific": { 00:07:22.636 "raid": { 00:07:22.636 "uuid": "a13e98b8-cec8-4f7f-a9b2-26ca59f1e104", 00:07:22.636 "strip_size_kb": 64, 00:07:22.636 "state": "online", 00:07:22.636 "raid_level": "raid0", 00:07:22.636 "superblock": true, 00:07:22.636 "num_base_bdevs": 2, 00:07:22.636 "num_base_bdevs_discovered": 2, 00:07:22.636 "num_base_bdevs_operational": 2, 00:07:22.636 "base_bdevs_list": [ 00:07:22.636 { 00:07:22.636 "name": "pt1", 00:07:22.636 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:22.636 "is_configured": true, 00:07:22.636 "data_offset": 2048, 00:07:22.636 "data_size": 63488 00:07:22.636 }, 00:07:22.636 { 00:07:22.636 "name": "pt2", 00:07:22.636 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:22.636 "is_configured": true, 00:07:22.636 "data_offset": 2048, 00:07:22.636 "data_size": 63488 00:07:22.636 } 00:07:22.636 ] 00:07:22.636 } 00:07:22.636 } 00:07:22.636 }' 00:07:22.636 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:22.636 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:22.636 pt2' 00:07:22.636 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:22.636 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:22.636 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:22.636 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:22.637 08:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.637 08:43:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.637 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:22.637 08:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.637 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:22.637 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:22.637 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:22.637 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:22.637 08:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:22.637 08:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.637 08:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.637 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.637 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:22.637 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:22.637 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:22.637 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:22.637 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.637 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.637 [2024-10-05 08:43:59.034160] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:07:22.637 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.637 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a13e98b8-cec8-4f7f-a9b2-26ca59f1e104 00:07:22.637 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a13e98b8-cec8-4f7f-a9b2-26ca59f1e104 ']' 00:07:22.637 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:22.637 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.637 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.637 [2024-10-05 08:43:59.081837] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:22.637 [2024-10-05 08:43:59.081903] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:22.637 [2024-10-05 08:43:59.081992] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:22.637 [2024-10-05 08:43:59.082032] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:22.637 [2024-10-05 08:43:59.082045] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:22.637 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.637 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.637 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.637 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.637 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:22.637 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.898 [2024-10-05 08:43:59.217606] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:22.898 [2024-10-05 08:43:59.219707] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:22.898 [2024-10-05 08:43:59.219809] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:22.898 [2024-10-05 08:43:59.219878] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:22.898 [2024-10-05 08:43:59.219893] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:22.898 [2024-10-05 08:43:59.219902] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:22.898 request: 00:07:22.898 { 00:07:22.898 "name": "raid_bdev1", 00:07:22.898 "raid_level": "raid0", 00:07:22.898 "base_bdevs": [ 00:07:22.898 "malloc1", 00:07:22.898 "malloc2" 00:07:22.898 ], 00:07:22.898 "strip_size_kb": 64, 00:07:22.898 "superblock": false, 00:07:22.898 "method": "bdev_raid_create", 00:07:22.898 "req_id": 1 00:07:22.898 } 00:07:22.898 Got JSON-RPC error response 00:07:22.898 response: 00:07:22.898 { 00:07:22.898 "code": -17, 00:07:22.898 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:22.898 } 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.898 [2024-10-05 08:43:59.281473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:22.898 [2024-10-05 08:43:59.281521] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:22.898 [2024-10-05 08:43:59.281539] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:22.898 [2024-10-05 08:43:59.281550] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:22.898 [2024-10-05 08:43:59.283830] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:22.898 [2024-10-05 08:43:59.283865] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:22.898 [2024-10-05 08:43:59.283927] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:22.898 [2024-10-05 08:43:59.283989] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:22.898 pt1 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.898 "name": "raid_bdev1", 00:07:22.898 "uuid": "a13e98b8-cec8-4f7f-a9b2-26ca59f1e104", 00:07:22.898 "strip_size_kb": 64, 00:07:22.898 "state": "configuring", 00:07:22.898 "raid_level": "raid0", 00:07:22.898 "superblock": true, 00:07:22.898 "num_base_bdevs": 2, 00:07:22.898 "num_base_bdevs_discovered": 1, 00:07:22.898 "num_base_bdevs_operational": 2, 00:07:22.898 "base_bdevs_list": [ 00:07:22.898 { 00:07:22.898 "name": "pt1", 00:07:22.898 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:22.898 "is_configured": true, 00:07:22.898 "data_offset": 2048, 00:07:22.898 "data_size": 63488 00:07:22.898 }, 00:07:22.898 { 00:07:22.898 "name": null, 00:07:22.898 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:22.898 "is_configured": false, 00:07:22.898 "data_offset": 2048, 00:07:22.898 "data_size": 63488 00:07:22.898 } 00:07:22.898 ] 00:07:22.898 }' 00:07:22.898 08:43:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.898 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.469 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:23.469 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:23.469 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:23.469 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:23.469 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.469 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.469 [2024-10-05 08:43:59.660799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:23.469 [2024-10-05 08:43:59.660899] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:23.469 [2024-10-05 08:43:59.660933] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:23.469 [2024-10-05 08:43:59.660968] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:23.469 [2024-10-05 08:43:59.661359] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:23.469 [2024-10-05 08:43:59.661413] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:23.469 [2024-10-05 08:43:59.661486] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:23.469 [2024-10-05 08:43:59.661531] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:23.469 [2024-10-05 08:43:59.661639] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:23.469 [2024-10-05 08:43:59.661675] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:23.469 [2024-10-05 08:43:59.661924] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:23.469 [2024-10-05 08:43:59.662134] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:23.469 [2024-10-05 08:43:59.662175] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:23.469 [2024-10-05 08:43:59.662335] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:23.469 pt2 00:07:23.469 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.469 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:23.469 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:23.469 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:23.469 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:23.469 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:23.469 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:23.469 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:23.469 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:23.469 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.469 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.469 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.469 08:43:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.469 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.469 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.469 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:23.469 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.470 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.470 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.470 "name": "raid_bdev1", 00:07:23.470 "uuid": "a13e98b8-cec8-4f7f-a9b2-26ca59f1e104", 00:07:23.470 "strip_size_kb": 64, 00:07:23.470 "state": "online", 00:07:23.470 "raid_level": "raid0", 00:07:23.470 "superblock": true, 00:07:23.470 "num_base_bdevs": 2, 00:07:23.470 "num_base_bdevs_discovered": 2, 00:07:23.470 "num_base_bdevs_operational": 2, 00:07:23.470 "base_bdevs_list": [ 00:07:23.470 { 00:07:23.470 "name": "pt1", 00:07:23.470 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:23.470 "is_configured": true, 00:07:23.470 "data_offset": 2048, 00:07:23.470 "data_size": 63488 00:07:23.470 }, 00:07:23.470 { 00:07:23.470 "name": "pt2", 00:07:23.470 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:23.470 "is_configured": true, 00:07:23.470 "data_offset": 2048, 00:07:23.470 "data_size": 63488 00:07:23.470 } 00:07:23.470 ] 00:07:23.470 }' 00:07:23.470 08:43:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.470 08:43:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.729 08:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:23.729 08:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:23.729 
08:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:23.729 08:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:23.729 08:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:23.729 08:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:23.730 08:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:23.730 08:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.730 08:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.730 08:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:23.730 [2024-10-05 08:44:00.020375] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:23.730 08:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.730 08:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:23.730 "name": "raid_bdev1", 00:07:23.730 "aliases": [ 00:07:23.730 "a13e98b8-cec8-4f7f-a9b2-26ca59f1e104" 00:07:23.730 ], 00:07:23.730 "product_name": "Raid Volume", 00:07:23.730 "block_size": 512, 00:07:23.730 "num_blocks": 126976, 00:07:23.730 "uuid": "a13e98b8-cec8-4f7f-a9b2-26ca59f1e104", 00:07:23.730 "assigned_rate_limits": { 00:07:23.730 "rw_ios_per_sec": 0, 00:07:23.730 "rw_mbytes_per_sec": 0, 00:07:23.730 "r_mbytes_per_sec": 0, 00:07:23.730 "w_mbytes_per_sec": 0 00:07:23.730 }, 00:07:23.730 "claimed": false, 00:07:23.730 "zoned": false, 00:07:23.730 "supported_io_types": { 00:07:23.730 "read": true, 00:07:23.730 "write": true, 00:07:23.730 "unmap": true, 00:07:23.730 "flush": true, 00:07:23.730 "reset": true, 00:07:23.730 "nvme_admin": false, 00:07:23.730 "nvme_io": false, 00:07:23.730 "nvme_io_md": false, 00:07:23.730 
"write_zeroes": true, 00:07:23.730 "zcopy": false, 00:07:23.730 "get_zone_info": false, 00:07:23.730 "zone_management": false, 00:07:23.730 "zone_append": false, 00:07:23.730 "compare": false, 00:07:23.730 "compare_and_write": false, 00:07:23.730 "abort": false, 00:07:23.730 "seek_hole": false, 00:07:23.730 "seek_data": false, 00:07:23.730 "copy": false, 00:07:23.730 "nvme_iov_md": false 00:07:23.730 }, 00:07:23.730 "memory_domains": [ 00:07:23.730 { 00:07:23.730 "dma_device_id": "system", 00:07:23.730 "dma_device_type": 1 00:07:23.730 }, 00:07:23.730 { 00:07:23.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.730 "dma_device_type": 2 00:07:23.730 }, 00:07:23.730 { 00:07:23.730 "dma_device_id": "system", 00:07:23.730 "dma_device_type": 1 00:07:23.730 }, 00:07:23.730 { 00:07:23.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.730 "dma_device_type": 2 00:07:23.730 } 00:07:23.730 ], 00:07:23.730 "driver_specific": { 00:07:23.730 "raid": { 00:07:23.730 "uuid": "a13e98b8-cec8-4f7f-a9b2-26ca59f1e104", 00:07:23.730 "strip_size_kb": 64, 00:07:23.730 "state": "online", 00:07:23.730 "raid_level": "raid0", 00:07:23.730 "superblock": true, 00:07:23.730 "num_base_bdevs": 2, 00:07:23.730 "num_base_bdevs_discovered": 2, 00:07:23.730 "num_base_bdevs_operational": 2, 00:07:23.730 "base_bdevs_list": [ 00:07:23.730 { 00:07:23.730 "name": "pt1", 00:07:23.730 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:23.730 "is_configured": true, 00:07:23.730 "data_offset": 2048, 00:07:23.730 "data_size": 63488 00:07:23.730 }, 00:07:23.730 { 00:07:23.730 "name": "pt2", 00:07:23.730 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:23.730 "is_configured": true, 00:07:23.730 "data_offset": 2048, 00:07:23.730 "data_size": 63488 00:07:23.730 } 00:07:23.730 ] 00:07:23.730 } 00:07:23.730 } 00:07:23.730 }' 00:07:23.730 08:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:07:23.730 08:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:23.730 pt2' 00:07:23.730 08:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:23.730 08:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:23.730 08:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:23.730 08:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:23.730 08:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:23.730 08:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.730 08:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.730 08:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.730 08:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:23.730 08:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:23.730 08:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:23.730 08:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:23.730 08:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.730 08:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:23.730 08:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.730 08:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.730 08:44:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:23.730 08:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:23.730 08:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:23.730 08:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:23.730 08:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.730 08:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.990 [2024-10-05 08:44:00.204112] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:23.990 08:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.990 08:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a13e98b8-cec8-4f7f-a9b2-26ca59f1e104 '!=' a13e98b8-cec8-4f7f-a9b2-26ca59f1e104 ']' 00:07:23.990 08:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:23.990 08:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:23.990 08:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:23.990 08:44:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61062 00:07:23.990 08:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 61062 ']' 00:07:23.990 08:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 61062 00:07:23.990 08:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:23.990 08:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:23.990 08:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61062 00:07:23.990 killing process with pid 61062 
00:07:23.990 08:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:23.990 08:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:23.990 08:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61062' 00:07:23.990 08:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 61062 00:07:23.990 [2024-10-05 08:44:00.273524] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:23.990 [2024-10-05 08:44:00.273589] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:23.990 [2024-10-05 08:44:00.273627] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:23.990 [2024-10-05 08:44:00.273638] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:23.990 08:44:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 61062 00:07:24.250 [2024-10-05 08:44:00.488544] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:25.633 08:44:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:25.633 00:07:25.633 real 0m4.452s 00:07:25.633 user 0m5.874s 00:07:25.633 sys 0m0.798s 00:07:25.633 08:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.633 08:44:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.633 ************************************ 00:07:25.633 END TEST raid_superblock_test 00:07:25.633 ************************************ 00:07:25.633 08:44:01 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:25.633 08:44:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:25.633 08:44:01 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.633 08:44:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:25.633 ************************************ 00:07:25.633 START TEST raid_read_error_test 00:07:25.633 ************************************ 00:07:25.633 08:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:07:25.633 08:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:25.633 08:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:25.633 08:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:25.633 08:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:25.633 08:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:25.633 08:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:25.633 08:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:25.633 08:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:25.633 08:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:25.633 08:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:25.633 08:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:25.633 08:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:25.633 08:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:25.633 08:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:25.633 08:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:25.633 08:44:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:25.633 08:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:25.633 08:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:25.633 08:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:25.633 08:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:25.633 08:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:25.633 08:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:25.633 08:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.NMLCm9x9k9 00:07:25.633 08:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61248 00:07:25.633 08:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:25.633 08:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61248 00:07:25.633 08:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 61248 ']' 00:07:25.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.633 08:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.633 08:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:25.633 08:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:25.633 08:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:25.633 08:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.633 [2024-10-05 08:44:01.979986] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:07:25.633 [2024-10-05 08:44:01.980095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61248 ] 00:07:25.893 [2024-10-05 08:44:02.134689] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.154 [2024-10-05 08:44:02.380130] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.154 [2024-10-05 08:44:02.601054] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:26.154 [2024-10-05 08:44:02.601227] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:26.414 08:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:26.414 08:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:26.414 08:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:26.414 08:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:26.414 08:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.414 08:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.414 BaseBdev1_malloc 00:07:26.414 08:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.414 08:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:26.414 08:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.414 08:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.414 true 00:07:26.414 08:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.414 08:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:26.414 08:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.414 08:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.414 [2024-10-05 08:44:02.877790] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:26.414 [2024-10-05 08:44:02.877858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:26.414 [2024-10-05 08:44:02.877892] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:26.414 [2024-10-05 08:44:02.877904] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:26.414 [2024-10-05 08:44:02.880333] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:26.414 [2024-10-05 08:44:02.880372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:26.414 BaseBdev1 00:07:26.414 08:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.414 08:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:26.414 08:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:26.414 08:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.414 08:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:26.674 BaseBdev2_malloc 00:07:26.674 08:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.674 08:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:26.674 08:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.674 08:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.674 true 00:07:26.674 08:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.674 08:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:26.674 08:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.674 08:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.674 [2024-10-05 08:44:02.961452] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:26.674 [2024-10-05 08:44:02.961508] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:26.674 [2024-10-05 08:44:02.961524] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:26.674 [2024-10-05 08:44:02.961536] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:26.674 [2024-10-05 08:44:02.963881] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:26.674 [2024-10-05 08:44:02.963923] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:26.674 BaseBdev2 00:07:26.674 08:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.674 08:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:26.674 08:44:02 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.674 08:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.674 [2024-10-05 08:44:02.973510] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:26.674 [2024-10-05 08:44:02.975539] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:26.674 [2024-10-05 08:44:02.975729] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:26.674 [2024-10-05 08:44:02.975743] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:26.674 [2024-10-05 08:44:02.975978] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:26.674 [2024-10-05 08:44:02.976159] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:26.674 [2024-10-05 08:44:02.976175] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:26.674 [2024-10-05 08:44:02.976323] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:26.674 08:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.674 08:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:26.674 08:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:26.674 08:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:26.674 08:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:26.674 08:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.674 08:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:26.674 08:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.674 08:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.674 08:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.674 08:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.674 08:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:26.674 08:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.674 08:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.674 08:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.674 08:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.674 08:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.674 "name": "raid_bdev1", 00:07:26.674 "uuid": "166e5d97-33c8-4fc8-9dc9-78185d297262", 00:07:26.674 "strip_size_kb": 64, 00:07:26.674 "state": "online", 00:07:26.674 "raid_level": "raid0", 00:07:26.674 "superblock": true, 00:07:26.674 "num_base_bdevs": 2, 00:07:26.674 "num_base_bdevs_discovered": 2, 00:07:26.674 "num_base_bdevs_operational": 2, 00:07:26.674 "base_bdevs_list": [ 00:07:26.674 { 00:07:26.674 "name": "BaseBdev1", 00:07:26.674 "uuid": "2c7a3ea0-7e2e-53af-8a73-2a0451dd1c41", 00:07:26.674 "is_configured": true, 00:07:26.674 "data_offset": 2048, 00:07:26.674 "data_size": 63488 00:07:26.674 }, 00:07:26.674 { 00:07:26.674 "name": "BaseBdev2", 00:07:26.674 "uuid": "f2d8b6fc-5204-529c-8f27-2e1135452345", 00:07:26.674 "is_configured": true, 00:07:26.674 "data_offset": 2048, 00:07:26.674 "data_size": 63488 00:07:26.674 } 00:07:26.674 ] 00:07:26.674 }' 00:07:26.674 08:44:03 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.675 08:44:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.244 08:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:27.244 08:44:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:27.244 [2024-10-05 08:44:03.478240] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:28.248 08:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:28.248 08:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.248 08:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.248 08:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.248 08:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:28.248 08:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:28.248 08:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:28.248 08:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:28.248 08:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:28.248 08:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:28.248 08:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:28.248 08:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.248 08:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:28.248 08:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.248 08:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.248 08:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.248 08:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.248 08:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.248 08:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:28.248 08:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.248 08:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.248 08:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.248 08:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.248 "name": "raid_bdev1", 00:07:28.248 "uuid": "166e5d97-33c8-4fc8-9dc9-78185d297262", 00:07:28.248 "strip_size_kb": 64, 00:07:28.248 "state": "online", 00:07:28.248 "raid_level": "raid0", 00:07:28.248 "superblock": true, 00:07:28.248 "num_base_bdevs": 2, 00:07:28.248 "num_base_bdevs_discovered": 2, 00:07:28.248 "num_base_bdevs_operational": 2, 00:07:28.248 "base_bdevs_list": [ 00:07:28.248 { 00:07:28.248 "name": "BaseBdev1", 00:07:28.248 "uuid": "2c7a3ea0-7e2e-53af-8a73-2a0451dd1c41", 00:07:28.248 "is_configured": true, 00:07:28.248 "data_offset": 2048, 00:07:28.248 "data_size": 63488 00:07:28.248 }, 00:07:28.248 { 00:07:28.248 "name": "BaseBdev2", 00:07:28.248 "uuid": "f2d8b6fc-5204-529c-8f27-2e1135452345", 00:07:28.248 "is_configured": true, 00:07:28.248 "data_offset": 2048, 00:07:28.248 "data_size": 63488 00:07:28.248 } 00:07:28.248 ] 00:07:28.248 }' 00:07:28.248 08:44:04 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.248 08:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.509 08:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:28.509 08:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.509 08:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.509 [2024-10-05 08:44:04.814388] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:28.509 [2024-10-05 08:44:04.814517] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:28.509 [2024-10-05 08:44:04.817146] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:28.509 [2024-10-05 08:44:04.817196] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:28.509 [2024-10-05 08:44:04.817231] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:28.509 [2024-10-05 08:44:04.817243] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:28.509 { 00:07:28.509 "results": [ 00:07:28.509 { 00:07:28.509 "job": "raid_bdev1", 00:07:28.509 "core_mask": "0x1", 00:07:28.509 "workload": "randrw", 00:07:28.509 "percentage": 50, 00:07:28.509 "status": "finished", 00:07:28.509 "queue_depth": 1, 00:07:28.509 "io_size": 131072, 00:07:28.509 "runtime": 1.336662, 00:07:28.509 "iops": 14992.571046382705, 00:07:28.509 "mibps": 1874.0713807978382, 00:07:28.509 "io_failed": 1, 00:07:28.509 "io_timeout": 0, 00:07:28.509 "avg_latency_us": 93.76146863994313, 00:07:28.509 "min_latency_us": 25.2646288209607, 00:07:28.509 "max_latency_us": 1387.989519650655 00:07:28.509 } 00:07:28.509 ], 00:07:28.509 "core_count": 1 00:07:28.509 } 00:07:28.509 08:44:04 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.509 08:44:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61248 00:07:28.509 08:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 61248 ']' 00:07:28.509 08:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 61248 00:07:28.509 08:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:28.509 08:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:28.509 08:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61248 00:07:28.509 08:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:28.509 08:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:28.509 08:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61248' 00:07:28.509 killing process with pid 61248 00:07:28.509 08:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 61248 00:07:28.509 08:44:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 61248 00:07:28.509 [2024-10-05 08:44:04.856738] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:28.768 [2024-10-05 08:44:05.010010] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:30.150 08:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.NMLCm9x9k9 00:07:30.150 08:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:30.150 08:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:30.150 08:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:07:30.150 08:44:06 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:30.150 08:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:30.150 08:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:30.150 08:44:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:07:30.150 00:07:30.150 real 0m4.535s 00:07:30.150 user 0m5.195s 00:07:30.150 sys 0m0.624s 00:07:30.150 ************************************ 00:07:30.150 END TEST raid_read_error_test 00:07:30.150 ************************************ 00:07:30.150 08:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.150 08:44:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.150 08:44:06 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:30.150 08:44:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:30.150 08:44:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.150 08:44:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:30.150 ************************************ 00:07:30.150 START TEST raid_write_error_test 00:07:30.150 ************************************ 00:07:30.150 08:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:07:30.150 08:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:30.150 08:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:30.150 08:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:30.150 08:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:30.150 08:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:30.150 08:44:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:30.150 08:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:30.150 08:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:30.150 08:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:30.150 08:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:30.150 08:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:30.150 08:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:30.150 08:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:30.150 08:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:30.150 08:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:30.150 08:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:30.150 08:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:30.150 08:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:30.150 08:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:30.150 08:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:30.150 08:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:30.150 08:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:30.150 08:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.iiU9X5cSb7 00:07:30.150 08:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61365 00:07:30.150 08:44:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:30.150 08:44:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61365 00:07:30.150 08:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 61365 ']' 00:07:30.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.150 08:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.150 08:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:30.150 08:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.150 08:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:30.150 08:44:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.150 [2024-10-05 08:44:06.585620] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:07:30.150 [2024-10-05 08:44:06.585810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61365 ] 00:07:30.410 [2024-10-05 08:44:06.734376] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.669 [2024-10-05 08:44:06.984616] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.928 [2024-10-05 08:44:07.220690] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:30.928 [2024-10-05 08:44:07.220727] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:30.928 08:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:30.928 08:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:30.928 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:30.928 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:30.928 08:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.928 08:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.188 BaseBdev1_malloc 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.188 true 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.188 [2024-10-05 08:44:07.462717] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:31.188 [2024-10-05 08:44:07.462790] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:31.188 [2024-10-05 08:44:07.462810] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:31.188 [2024-10-05 08:44:07.462822] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:31.188 [2024-10-05 08:44:07.465204] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:31.188 [2024-10-05 08:44:07.465330] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:31.188 BaseBdev1 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.188 BaseBdev2_malloc 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:31.188 08:44:07 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.188 true 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.188 [2024-10-05 08:44:07.549447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:31.188 [2024-10-05 08:44:07.549518] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:31.188 [2024-10-05 08:44:07.549537] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:31.188 [2024-10-05 08:44:07.549549] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:31.188 [2024-10-05 08:44:07.551921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:31.188 [2024-10-05 08:44:07.551974] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:31.188 BaseBdev2 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.188 [2024-10-05 08:44:07.561504] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:31.188 [2024-10-05 08:44:07.563618] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:31.188 [2024-10-05 08:44:07.563815] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:31.188 [2024-10-05 08:44:07.563830] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:31.188 [2024-10-05 08:44:07.564080] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:31.188 [2024-10-05 08:44:07.564235] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:31.188 [2024-10-05 08:44:07.564245] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:31.188 [2024-10-05 08:44:07.564401] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.188 "name": "raid_bdev1", 00:07:31.188 "uuid": "46bae879-f3f8-4812-997d-15719791f9ef", 00:07:31.188 "strip_size_kb": 64, 00:07:31.188 "state": "online", 00:07:31.188 "raid_level": "raid0", 00:07:31.188 "superblock": true, 00:07:31.188 "num_base_bdevs": 2, 00:07:31.188 "num_base_bdevs_discovered": 2, 00:07:31.188 "num_base_bdevs_operational": 2, 00:07:31.188 "base_bdevs_list": [ 00:07:31.188 { 00:07:31.188 "name": "BaseBdev1", 00:07:31.188 "uuid": "c94a1f0a-46f7-540e-9d15-30bc94c60b13", 00:07:31.188 "is_configured": true, 00:07:31.188 "data_offset": 2048, 00:07:31.188 "data_size": 63488 00:07:31.188 }, 00:07:31.188 { 00:07:31.188 "name": "BaseBdev2", 00:07:31.188 "uuid": "1c7a6980-7e07-530c-b199-1e2a3d19ae0c", 00:07:31.188 "is_configured": true, 00:07:31.188 "data_offset": 2048, 00:07:31.188 "data_size": 63488 00:07:31.188 } 00:07:31.188 ] 00:07:31.188 }' 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.188 08:44:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.758 08:44:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:31.758 08:44:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:31.758 [2024-10-05 08:44:08.094143] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:32.699 08:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:32.699 08:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.699 08:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.699 08:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.699 08:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:32.699 08:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:32.699 08:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:32.699 08:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:32.699 08:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:32.699 08:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:32.699 08:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:32.699 08:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.699 08:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:32.699 08:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.699 08:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.699 08:44:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.699 08:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.699 08:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.699 08:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:32.699 08:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.699 08:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.699 08:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.699 08:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.699 "name": "raid_bdev1", 00:07:32.699 "uuid": "46bae879-f3f8-4812-997d-15719791f9ef", 00:07:32.699 "strip_size_kb": 64, 00:07:32.699 "state": "online", 00:07:32.699 "raid_level": "raid0", 00:07:32.699 "superblock": true, 00:07:32.699 "num_base_bdevs": 2, 00:07:32.699 "num_base_bdevs_discovered": 2, 00:07:32.699 "num_base_bdevs_operational": 2, 00:07:32.699 "base_bdevs_list": [ 00:07:32.699 { 00:07:32.699 "name": "BaseBdev1", 00:07:32.699 "uuid": "c94a1f0a-46f7-540e-9d15-30bc94c60b13", 00:07:32.699 "is_configured": true, 00:07:32.699 "data_offset": 2048, 00:07:32.699 "data_size": 63488 00:07:32.699 }, 00:07:32.699 { 00:07:32.699 "name": "BaseBdev2", 00:07:32.699 "uuid": "1c7a6980-7e07-530c-b199-1e2a3d19ae0c", 00:07:32.699 "is_configured": true, 00:07:32.699 "data_offset": 2048, 00:07:32.699 "data_size": 63488 00:07:32.699 } 00:07:32.699 ] 00:07:32.699 }' 00:07:32.699 08:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.699 08:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.959 08:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:07:32.959 08:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.959 08:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.959 [2024-10-05 08:44:09.430155] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:32.959 [2024-10-05 08:44:09.430281] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:33.219 [2024-10-05 08:44:09.432865] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:33.219 [2024-10-05 08:44:09.432971] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:33.219 [2024-10-05 08:44:09.433030] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:33.219 [2024-10-05 08:44:09.433072] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:33.219 08:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.219 { 00:07:33.219 "results": [ 00:07:33.219 { 00:07:33.219 "job": "raid_bdev1", 00:07:33.219 "core_mask": "0x1", 00:07:33.219 "workload": "randrw", 00:07:33.219 "percentage": 50, 00:07:33.219 "status": "finished", 00:07:33.219 "queue_depth": 1, 00:07:33.219 "io_size": 131072, 00:07:33.219 "runtime": 1.336566, 00:07:33.219 "iops": 15197.154498917374, 00:07:33.219 "mibps": 1899.6443123646718, 00:07:33.219 "io_failed": 1, 00:07:33.219 "io_timeout": 0, 00:07:33.219 "avg_latency_us": 92.40731306150448, 00:07:33.219 "min_latency_us": 25.041048034934498, 00:07:33.219 "max_latency_us": 1316.4436681222708 00:07:33.219 } 00:07:33.219 ], 00:07:33.219 "core_count": 1 00:07:33.219 } 00:07:33.219 08:44:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61365 00:07:33.219 08:44:09 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@950 -- # '[' -z 61365 ']' 00:07:33.219 08:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 61365 00:07:33.219 08:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:07:33.219 08:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:33.219 08:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61365 00:07:33.219 08:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:33.219 killing process with pid 61365 00:07:33.219 08:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:33.219 08:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61365' 00:07:33.219 08:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 61365 00:07:33.219 [2024-10-05 08:44:09.482562] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:33.219 08:44:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 61365 00:07:33.219 [2024-10-05 08:44:09.624908] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:34.602 08:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.iiU9X5cSb7 00:07:34.602 08:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:34.602 08:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:34.602 08:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:07:34.602 ************************************ 00:07:34.602 END TEST raid_write_error_test 00:07:34.602 ************************************ 00:07:34.602 08:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:34.602 
08:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:34.602 08:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:34.602 08:44:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:07:34.602 00:07:34.602 real 0m4.521s 00:07:34.602 user 0m5.240s 00:07:34.602 sys 0m0.632s 00:07:34.602 08:44:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.602 08:44:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.602 08:44:11 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:34.602 08:44:11 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:34.602 08:44:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:34.602 08:44:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.602 08:44:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:34.602 ************************************ 00:07:34.602 START TEST raid_state_function_test 00:07:34.602 ************************************ 00:07:34.602 08:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:07:34.602 08:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:34.602 08:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:34.602 08:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:34.602 08:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:34.862 08:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:34.862 08:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:34.862 08:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:34.862 08:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:34.862 08:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:34.862 08:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:34.862 08:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:34.862 08:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:34.862 08:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:34.862 08:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:34.862 08:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:34.862 08:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:34.862 08:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:34.862 08:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:34.862 08:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:34.862 08:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:34.862 08:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:34.862 08:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:34.862 08:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:34.862 08:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61479 00:07:34.863 08:44:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:34.863 08:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61479' 00:07:34.863 Process raid pid: 61479 00:07:34.863 08:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61479 00:07:34.863 08:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 61479 ']' 00:07:34.863 08:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.863 08:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:34.863 08:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.863 08:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:34.863 08:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.863 [2024-10-05 08:44:11.161595] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:07:34.863 [2024-10-05 08:44:11.161800] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:34.863 [2024-10-05 08:44:11.317042] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.123 [2024-10-05 08:44:11.568457] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.383 [2024-10-05 08:44:11.795645] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.383 [2024-10-05 08:44:11.795684] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.644 08:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:35.644 08:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:35.644 08:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:35.644 08:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.644 08:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.644 [2024-10-05 08:44:11.990993] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:35.644 [2024-10-05 08:44:11.991057] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:35.644 [2024-10-05 08:44:11.991068] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:35.644 [2024-10-05 08:44:11.991094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:35.644 08:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.644 08:44:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:35.644 08:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:35.644 08:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:35.644 08:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:35.644 08:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:35.644 08:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:35.644 08:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:35.644 08:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:35.644 08:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:35.644 08:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:35.644 08:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.644 08:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:35.644 08:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.644 08:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.644 08:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.644 08:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:35.644 "name": "Existed_Raid", 00:07:35.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:35.644 "strip_size_kb": 64, 00:07:35.644 "state": "configuring", 00:07:35.644 
"raid_level": "concat", 00:07:35.644 "superblock": false, 00:07:35.644 "num_base_bdevs": 2, 00:07:35.644 "num_base_bdevs_discovered": 0, 00:07:35.644 "num_base_bdevs_operational": 2, 00:07:35.644 "base_bdevs_list": [ 00:07:35.644 { 00:07:35.644 "name": "BaseBdev1", 00:07:35.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:35.644 "is_configured": false, 00:07:35.644 "data_offset": 0, 00:07:35.644 "data_size": 0 00:07:35.644 }, 00:07:35.644 { 00:07:35.644 "name": "BaseBdev2", 00:07:35.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:35.644 "is_configured": false, 00:07:35.644 "data_offset": 0, 00:07:35.644 "data_size": 0 00:07:35.644 } 00:07:35.644 ] 00:07:35.644 }' 00:07:35.644 08:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:35.644 08:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.238 08:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:36.238 08:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.238 08:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.238 [2024-10-05 08:44:12.434093] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:36.238 [2024-10-05 08:44:12.434186] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:36.238 08:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.238 08:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:36.238 08:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.238 08:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:36.238 [2024-10-05 08:44:12.446127] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:36.238 [2024-10-05 08:44:12.446205] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:36.238 [2024-10-05 08:44:12.446233] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:36.238 [2024-10-05 08:44:12.446258] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:36.238 08:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.238 08:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:36.238 08:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.238 08:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.238 [2024-10-05 08:44:12.520457] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:36.238 BaseBdev1 00:07:36.238 08:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.238 08:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:36.238 08:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:36.238 08:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:36.238 08:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:36.238 08:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:36.238 08:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:36.238 08:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:07:36.238 08:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.238 08:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.238 08:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.238 08:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:36.238 08:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.238 08:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.238 [ 00:07:36.238 { 00:07:36.238 "name": "BaseBdev1", 00:07:36.238 "aliases": [ 00:07:36.238 "fa7923fa-12f2-4fa6-89c5-767e37c75c8e" 00:07:36.238 ], 00:07:36.238 "product_name": "Malloc disk", 00:07:36.238 "block_size": 512, 00:07:36.238 "num_blocks": 65536, 00:07:36.238 "uuid": "fa7923fa-12f2-4fa6-89c5-767e37c75c8e", 00:07:36.238 "assigned_rate_limits": { 00:07:36.238 "rw_ios_per_sec": 0, 00:07:36.238 "rw_mbytes_per_sec": 0, 00:07:36.238 "r_mbytes_per_sec": 0, 00:07:36.238 "w_mbytes_per_sec": 0 00:07:36.238 }, 00:07:36.238 "claimed": true, 00:07:36.238 "claim_type": "exclusive_write", 00:07:36.238 "zoned": false, 00:07:36.238 "supported_io_types": { 00:07:36.238 "read": true, 00:07:36.238 "write": true, 00:07:36.238 "unmap": true, 00:07:36.238 "flush": true, 00:07:36.238 "reset": true, 00:07:36.238 "nvme_admin": false, 00:07:36.238 "nvme_io": false, 00:07:36.238 "nvme_io_md": false, 00:07:36.238 "write_zeroes": true, 00:07:36.238 "zcopy": true, 00:07:36.238 "get_zone_info": false, 00:07:36.238 "zone_management": false, 00:07:36.238 "zone_append": false, 00:07:36.238 "compare": false, 00:07:36.238 "compare_and_write": false, 00:07:36.238 "abort": true, 00:07:36.238 "seek_hole": false, 00:07:36.238 "seek_data": false, 00:07:36.238 "copy": true, 00:07:36.238 "nvme_iov_md": 
false 00:07:36.238 }, 00:07:36.238 "memory_domains": [ 00:07:36.238 { 00:07:36.238 "dma_device_id": "system", 00:07:36.238 "dma_device_type": 1 00:07:36.238 }, 00:07:36.238 { 00:07:36.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.239 "dma_device_type": 2 00:07:36.239 } 00:07:36.239 ], 00:07:36.239 "driver_specific": {} 00:07:36.239 } 00:07:36.239 ] 00:07:36.239 08:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.239 08:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:36.239 08:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:36.239 08:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:36.239 08:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:36.239 08:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:36.239 08:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.239 08:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:36.239 08:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.239 08:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.239 08:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.239 08:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.239 08:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.239 08:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:36.239 
08:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.239 08:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.239 08:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.239 08:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.239 "name": "Existed_Raid", 00:07:36.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.239 "strip_size_kb": 64, 00:07:36.239 "state": "configuring", 00:07:36.239 "raid_level": "concat", 00:07:36.239 "superblock": false, 00:07:36.239 "num_base_bdevs": 2, 00:07:36.239 "num_base_bdevs_discovered": 1, 00:07:36.239 "num_base_bdevs_operational": 2, 00:07:36.239 "base_bdevs_list": [ 00:07:36.239 { 00:07:36.239 "name": "BaseBdev1", 00:07:36.239 "uuid": "fa7923fa-12f2-4fa6-89c5-767e37c75c8e", 00:07:36.239 "is_configured": true, 00:07:36.239 "data_offset": 0, 00:07:36.239 "data_size": 65536 00:07:36.239 }, 00:07:36.239 { 00:07:36.239 "name": "BaseBdev2", 00:07:36.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.239 "is_configured": false, 00:07:36.239 "data_offset": 0, 00:07:36.239 "data_size": 0 00:07:36.239 } 00:07:36.239 ] 00:07:36.239 }' 00:07:36.239 08:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.239 08:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.500 08:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:36.500 08:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.500 08:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.500 [2024-10-05 08:44:12.955743] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:36.500 [2024-10-05 08:44:12.955787] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:36.500 08:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.500 08:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:36.500 08:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.500 08:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.500 [2024-10-05 08:44:12.967760] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:36.500 [2024-10-05 08:44:12.969809] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:36.500 [2024-10-05 08:44:12.969854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:36.761 08:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.761 08:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:36.761 08:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:36.761 08:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:36.761 08:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:36.761 08:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:36.761 08:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:36.761 08:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.761 08:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:36.761 08:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.761 08:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.761 08:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.761 08:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.761 08:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.761 08:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.761 08:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.761 08:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:36.761 08:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.761 08:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.761 "name": "Existed_Raid", 00:07:36.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.761 "strip_size_kb": 64, 00:07:36.761 "state": "configuring", 00:07:36.761 "raid_level": "concat", 00:07:36.761 "superblock": false, 00:07:36.761 "num_base_bdevs": 2, 00:07:36.761 "num_base_bdevs_discovered": 1, 00:07:36.761 "num_base_bdevs_operational": 2, 00:07:36.761 "base_bdevs_list": [ 00:07:36.761 { 00:07:36.761 "name": "BaseBdev1", 00:07:36.761 "uuid": "fa7923fa-12f2-4fa6-89c5-767e37c75c8e", 00:07:36.761 "is_configured": true, 00:07:36.761 "data_offset": 0, 00:07:36.761 "data_size": 65536 00:07:36.761 }, 00:07:36.761 { 00:07:36.761 "name": "BaseBdev2", 00:07:36.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.761 "is_configured": false, 00:07:36.761 "data_offset": 0, 00:07:36.761 "data_size": 0 00:07:36.761 } 
00:07:36.761 ] 00:07:36.761 }' 00:07:36.761 08:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.761 08:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.022 08:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:37.022 08:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.022 08:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.022 [2024-10-05 08:44:13.434669] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:37.022 [2024-10-05 08:44:13.434825] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:37.022 [2024-10-05 08:44:13.434850] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:37.022 [2024-10-05 08:44:13.435204] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:37.022 [2024-10-05 08:44:13.435436] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:37.022 [2024-10-05 08:44:13.435483] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:37.022 [2024-10-05 08:44:13.435799] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.022 BaseBdev2 00:07:37.022 08:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.022 08:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:37.022 08:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:37.022 08:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:37.022 08:44:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:37.022 08:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:37.022 08:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:37.022 08:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:37.022 08:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.022 08:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.022 08:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.022 08:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:37.022 08:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.022 08:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.022 [ 00:07:37.022 { 00:07:37.022 "name": "BaseBdev2", 00:07:37.022 "aliases": [ 00:07:37.022 "a025ef3f-32b1-4bdc-9b93-98c1a8ae8de7" 00:07:37.022 ], 00:07:37.022 "product_name": "Malloc disk", 00:07:37.022 "block_size": 512, 00:07:37.022 "num_blocks": 65536, 00:07:37.022 "uuid": "a025ef3f-32b1-4bdc-9b93-98c1a8ae8de7", 00:07:37.022 "assigned_rate_limits": { 00:07:37.022 "rw_ios_per_sec": 0, 00:07:37.022 "rw_mbytes_per_sec": 0, 00:07:37.022 "r_mbytes_per_sec": 0, 00:07:37.022 "w_mbytes_per_sec": 0 00:07:37.022 }, 00:07:37.022 "claimed": true, 00:07:37.022 "claim_type": "exclusive_write", 00:07:37.022 "zoned": false, 00:07:37.022 "supported_io_types": { 00:07:37.022 "read": true, 00:07:37.022 "write": true, 00:07:37.022 "unmap": true, 00:07:37.022 "flush": true, 00:07:37.022 "reset": true, 00:07:37.022 "nvme_admin": false, 00:07:37.022 "nvme_io": false, 00:07:37.022 "nvme_io_md": 
false, 00:07:37.022 "write_zeroes": true, 00:07:37.022 "zcopy": true, 00:07:37.022 "get_zone_info": false, 00:07:37.022 "zone_management": false, 00:07:37.022 "zone_append": false, 00:07:37.022 "compare": false, 00:07:37.022 "compare_and_write": false, 00:07:37.022 "abort": true, 00:07:37.022 "seek_hole": false, 00:07:37.022 "seek_data": false, 00:07:37.022 "copy": true, 00:07:37.022 "nvme_iov_md": false 00:07:37.022 }, 00:07:37.022 "memory_domains": [ 00:07:37.022 { 00:07:37.022 "dma_device_id": "system", 00:07:37.022 "dma_device_type": 1 00:07:37.022 }, 00:07:37.022 { 00:07:37.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.022 "dma_device_type": 2 00:07:37.022 } 00:07:37.022 ], 00:07:37.022 "driver_specific": {} 00:07:37.022 } 00:07:37.022 ] 00:07:37.022 08:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.022 08:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:37.022 08:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:37.022 08:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:37.022 08:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:37.022 08:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.022 08:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:37.022 08:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:37.022 08:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.022 08:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.022 08:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:37.022 08:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.022 08:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.022 08:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.022 08:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.022 08:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.022 08:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.022 08:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.282 08:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.282 08:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.282 "name": "Existed_Raid", 00:07:37.282 "uuid": "1f9bb6dd-5005-4939-b439-b64c9897d54c", 00:07:37.282 "strip_size_kb": 64, 00:07:37.282 "state": "online", 00:07:37.282 "raid_level": "concat", 00:07:37.282 "superblock": false, 00:07:37.282 "num_base_bdevs": 2, 00:07:37.282 "num_base_bdevs_discovered": 2, 00:07:37.282 "num_base_bdevs_operational": 2, 00:07:37.282 "base_bdevs_list": [ 00:07:37.282 { 00:07:37.282 "name": "BaseBdev1", 00:07:37.282 "uuid": "fa7923fa-12f2-4fa6-89c5-767e37c75c8e", 00:07:37.282 "is_configured": true, 00:07:37.282 "data_offset": 0, 00:07:37.282 "data_size": 65536 00:07:37.282 }, 00:07:37.282 { 00:07:37.282 "name": "BaseBdev2", 00:07:37.282 "uuid": "a025ef3f-32b1-4bdc-9b93-98c1a8ae8de7", 00:07:37.282 "is_configured": true, 00:07:37.282 "data_offset": 0, 00:07:37.282 "data_size": 65536 00:07:37.282 } 00:07:37.282 ] 00:07:37.282 }' 00:07:37.282 08:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:37.283 08:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.543 08:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:37.543 08:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:37.543 08:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:37.543 08:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:37.543 08:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:37.543 08:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:37.543 08:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:37.543 08:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.543 08:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:37.543 08:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.543 [2024-10-05 08:44:13.926163] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:37.543 08:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.543 08:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:37.543 "name": "Existed_Raid", 00:07:37.543 "aliases": [ 00:07:37.543 "1f9bb6dd-5005-4939-b439-b64c9897d54c" 00:07:37.543 ], 00:07:37.543 "product_name": "Raid Volume", 00:07:37.543 "block_size": 512, 00:07:37.543 "num_blocks": 131072, 00:07:37.543 "uuid": "1f9bb6dd-5005-4939-b439-b64c9897d54c", 00:07:37.543 "assigned_rate_limits": { 00:07:37.543 "rw_ios_per_sec": 0, 00:07:37.543 "rw_mbytes_per_sec": 0, 00:07:37.543 "r_mbytes_per_sec": 
0, 00:07:37.544 "w_mbytes_per_sec": 0 00:07:37.544 }, 00:07:37.544 "claimed": false, 00:07:37.544 "zoned": false, 00:07:37.544 "supported_io_types": { 00:07:37.544 "read": true, 00:07:37.544 "write": true, 00:07:37.544 "unmap": true, 00:07:37.544 "flush": true, 00:07:37.544 "reset": true, 00:07:37.544 "nvme_admin": false, 00:07:37.544 "nvme_io": false, 00:07:37.544 "nvme_io_md": false, 00:07:37.544 "write_zeroes": true, 00:07:37.544 "zcopy": false, 00:07:37.544 "get_zone_info": false, 00:07:37.544 "zone_management": false, 00:07:37.544 "zone_append": false, 00:07:37.544 "compare": false, 00:07:37.544 "compare_and_write": false, 00:07:37.544 "abort": false, 00:07:37.544 "seek_hole": false, 00:07:37.544 "seek_data": false, 00:07:37.544 "copy": false, 00:07:37.544 "nvme_iov_md": false 00:07:37.544 }, 00:07:37.544 "memory_domains": [ 00:07:37.544 { 00:07:37.544 "dma_device_id": "system", 00:07:37.544 "dma_device_type": 1 00:07:37.544 }, 00:07:37.544 { 00:07:37.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.544 "dma_device_type": 2 00:07:37.544 }, 00:07:37.544 { 00:07:37.544 "dma_device_id": "system", 00:07:37.544 "dma_device_type": 1 00:07:37.544 }, 00:07:37.544 { 00:07:37.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.544 "dma_device_type": 2 00:07:37.544 } 00:07:37.544 ], 00:07:37.544 "driver_specific": { 00:07:37.544 "raid": { 00:07:37.544 "uuid": "1f9bb6dd-5005-4939-b439-b64c9897d54c", 00:07:37.544 "strip_size_kb": 64, 00:07:37.544 "state": "online", 00:07:37.544 "raid_level": "concat", 00:07:37.544 "superblock": false, 00:07:37.544 "num_base_bdevs": 2, 00:07:37.544 "num_base_bdevs_discovered": 2, 00:07:37.544 "num_base_bdevs_operational": 2, 00:07:37.544 "base_bdevs_list": [ 00:07:37.544 { 00:07:37.544 "name": "BaseBdev1", 00:07:37.544 "uuid": "fa7923fa-12f2-4fa6-89c5-767e37c75c8e", 00:07:37.544 "is_configured": true, 00:07:37.544 "data_offset": 0, 00:07:37.544 "data_size": 65536 00:07:37.544 }, 00:07:37.544 { 00:07:37.544 "name": "BaseBdev2", 
00:07:37.544 "uuid": "a025ef3f-32b1-4bdc-9b93-98c1a8ae8de7", 00:07:37.544 "is_configured": true, 00:07:37.544 "data_offset": 0, 00:07:37.544 "data_size": 65536 00:07:37.544 } 00:07:37.544 ] 00:07:37.544 } 00:07:37.544 } 00:07:37.544 }' 00:07:37.544 08:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:37.544 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:37.544 BaseBdev2' 00:07:37.804 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:37.804 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:37.804 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:37.804 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:37.804 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:37.804 08:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.804 08:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.804 08:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.804 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:37.804 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:37.804 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:37.804 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:37.804 08:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.804 08:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.804 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:37.804 08:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.804 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:37.804 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:37.804 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:37.804 08:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.804 08:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.804 [2024-10-05 08:44:14.145538] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:37.804 [2024-10-05 08:44:14.145621] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:37.804 [2024-10-05 08:44:14.145686] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:37.804 08:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.804 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:37.804 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:37.804 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:37.804 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:37.804 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:37.804 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:37.804 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.804 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:37.804 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:37.804 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.804 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:37.804 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.804 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.804 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.804 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.805 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.805 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.805 08:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.805 08:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.805 08:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.065 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.065 "name": "Existed_Raid", 00:07:38.065 "uuid": "1f9bb6dd-5005-4939-b439-b64c9897d54c", 00:07:38.065 "strip_size_kb": 64, 00:07:38.065 
"state": "offline", 00:07:38.065 "raid_level": "concat", 00:07:38.065 "superblock": false, 00:07:38.065 "num_base_bdevs": 2, 00:07:38.065 "num_base_bdevs_discovered": 1, 00:07:38.065 "num_base_bdevs_operational": 1, 00:07:38.065 "base_bdevs_list": [ 00:07:38.065 { 00:07:38.065 "name": null, 00:07:38.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.065 "is_configured": false, 00:07:38.065 "data_offset": 0, 00:07:38.065 "data_size": 65536 00:07:38.065 }, 00:07:38.065 { 00:07:38.065 "name": "BaseBdev2", 00:07:38.065 "uuid": "a025ef3f-32b1-4bdc-9b93-98c1a8ae8de7", 00:07:38.065 "is_configured": true, 00:07:38.065 "data_offset": 0, 00:07:38.065 "data_size": 65536 00:07:38.065 } 00:07:38.065 ] 00:07:38.065 }' 00:07:38.065 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.065 08:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.325 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:38.325 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:38.325 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:38.325 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.325 08:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.325 08:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.325 08:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.325 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:38.325 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:38.325 08:44:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:38.325 08:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.325 08:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.325 [2024-10-05 08:44:14.673320] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:38.325 [2024-10-05 08:44:14.673385] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:38.325 08:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.325 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:38.325 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:38.325 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.325 08:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.325 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:38.325 08:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.325 08:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.586 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:38.586 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:38.586 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:38.586 08:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61479 00:07:38.586 08:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 61479 ']' 00:07:38.586 08:44:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 61479 00:07:38.586 08:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:38.586 08:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:38.586 08:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61479 00:07:38.586 killing process with pid 61479 00:07:38.586 08:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:38.586 08:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:38.586 08:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61479' 00:07:38.586 08:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 61479 00:07:38.586 [2024-10-05 08:44:14.873035] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:38.586 08:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 61479 00:07:38.586 [2024-10-05 08:44:14.890599] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:39.969 08:44:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:39.969 00:07:39.969 real 0m5.161s 00:07:39.969 user 0m7.115s 00:07:39.969 sys 0m0.943s 00:07:39.969 ************************************ 00:07:39.969 END TEST raid_state_function_test 00:07:39.969 ************************************ 00:07:39.969 08:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:39.969 08:44:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.969 08:44:16 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:39.969 08:44:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 
']' 00:07:39.969 08:44:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:39.969 08:44:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:39.969 ************************************ 00:07:39.969 START TEST raid_state_function_test_sb 00:07:39.969 ************************************ 00:07:39.969 08:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:07:39.969 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:39.969 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:39.969 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:39.969 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:39.969 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:39.969 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:39.969 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:39.969 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:39.969 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:39.969 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:39.969 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:39.969 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:39.969 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:39.969 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:39.969 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:39.969 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:39.969 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:39.969 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:39.969 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:39.969 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:39.969 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:39.969 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:39.969 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:39.969 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61702 00:07:39.969 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:39.969 Process raid pid: 61702 00:07:39.969 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61702' 00:07:39.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:39.969 08:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61702 00:07:39.969 08:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 61702 ']' 00:07:39.969 08:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.969 08:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:39.969 08:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.969 08:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:39.969 08:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.969 [2024-10-05 08:44:16.399577] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:07:39.969 [2024-10-05 08:44:16.399784] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:40.229 [2024-10-05 08:44:16.564826] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.489 [2024-10-05 08:44:16.811915] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.749 [2024-10-05 08:44:17.050611] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:40.749 [2024-10-05 08:44:17.050736] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:41.010 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:41.010 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:41.010 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:41.010 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.010 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.010 [2024-10-05 08:44:17.233872] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:41.010 [2024-10-05 08:44:17.234047] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:41.010 [2024-10-05 08:44:17.234080] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:41.010 [2024-10-05 08:44:17.234108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:41.010 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:41.010 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:41.010 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.010 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:41.010 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:41.010 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.010 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:41.010 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.010 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.010 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.010 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.010 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.010 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.010 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.010 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.010 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.010 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.010 "name": "Existed_Raid", 00:07:41.010 "uuid": "0e3c48bf-2bbb-4806-bfaf-bfe506ff9903", 00:07:41.010 
"strip_size_kb": 64, 00:07:41.010 "state": "configuring", 00:07:41.010 "raid_level": "concat", 00:07:41.010 "superblock": true, 00:07:41.010 "num_base_bdevs": 2, 00:07:41.010 "num_base_bdevs_discovered": 0, 00:07:41.010 "num_base_bdevs_operational": 2, 00:07:41.010 "base_bdevs_list": [ 00:07:41.010 { 00:07:41.010 "name": "BaseBdev1", 00:07:41.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.010 "is_configured": false, 00:07:41.010 "data_offset": 0, 00:07:41.010 "data_size": 0 00:07:41.010 }, 00:07:41.010 { 00:07:41.010 "name": "BaseBdev2", 00:07:41.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.010 "is_configured": false, 00:07:41.010 "data_offset": 0, 00:07:41.010 "data_size": 0 00:07:41.010 } 00:07:41.010 ] 00:07:41.010 }' 00:07:41.010 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.010 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.271 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:41.271 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.271 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.271 [2024-10-05 08:44:17.633073] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:41.271 [2024-10-05 08:44:17.633186] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:41.271 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.271 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:41.271 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:41.271 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.271 [2024-10-05 08:44:17.645115] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:41.271 [2024-10-05 08:44:17.645197] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:41.271 [2024-10-05 08:44:17.645224] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:41.271 [2024-10-05 08:44:17.645249] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:41.271 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.271 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:41.271 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.271 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.271 [2024-10-05 08:44:17.735565] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:41.271 BaseBdev1 00:07:41.271 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.271 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:41.271 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:41.271 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:41.271 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:41.271 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:41.271 08:44:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:41.271 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:41.271 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.271 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.531 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.532 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:41.532 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.532 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.532 [ 00:07:41.532 { 00:07:41.532 "name": "BaseBdev1", 00:07:41.532 "aliases": [ 00:07:41.532 "0b7ce2b8-1ed9-4c87-b9ec-a0a4b02d346d" 00:07:41.532 ], 00:07:41.532 "product_name": "Malloc disk", 00:07:41.532 "block_size": 512, 00:07:41.532 "num_blocks": 65536, 00:07:41.532 "uuid": "0b7ce2b8-1ed9-4c87-b9ec-a0a4b02d346d", 00:07:41.532 "assigned_rate_limits": { 00:07:41.532 "rw_ios_per_sec": 0, 00:07:41.532 "rw_mbytes_per_sec": 0, 00:07:41.532 "r_mbytes_per_sec": 0, 00:07:41.532 "w_mbytes_per_sec": 0 00:07:41.532 }, 00:07:41.532 "claimed": true, 00:07:41.532 "claim_type": "exclusive_write", 00:07:41.532 "zoned": false, 00:07:41.532 "supported_io_types": { 00:07:41.532 "read": true, 00:07:41.532 "write": true, 00:07:41.532 "unmap": true, 00:07:41.532 "flush": true, 00:07:41.532 "reset": true, 00:07:41.532 "nvme_admin": false, 00:07:41.532 "nvme_io": false, 00:07:41.532 "nvme_io_md": false, 00:07:41.532 "write_zeroes": true, 00:07:41.532 "zcopy": true, 00:07:41.532 "get_zone_info": false, 00:07:41.532 "zone_management": false, 00:07:41.532 "zone_append": false, 00:07:41.532 "compare": false, 00:07:41.532 
"compare_and_write": false, 00:07:41.532 "abort": true, 00:07:41.532 "seek_hole": false, 00:07:41.532 "seek_data": false, 00:07:41.532 "copy": true, 00:07:41.532 "nvme_iov_md": false 00:07:41.532 }, 00:07:41.532 "memory_domains": [ 00:07:41.532 { 00:07:41.532 "dma_device_id": "system", 00:07:41.532 "dma_device_type": 1 00:07:41.532 }, 00:07:41.532 { 00:07:41.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.532 "dma_device_type": 2 00:07:41.532 } 00:07:41.532 ], 00:07:41.532 "driver_specific": {} 00:07:41.532 } 00:07:41.532 ] 00:07:41.532 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.532 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:41.532 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:41.532 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.532 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:41.532 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:41.532 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.532 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:41.532 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.532 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.532 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.532 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.532 08:44:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.532 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.532 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.532 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.532 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.532 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.532 "name": "Existed_Raid", 00:07:41.532 "uuid": "8a0e53c1-5df9-463f-b64f-3522cd98d4d5", 00:07:41.532 "strip_size_kb": 64, 00:07:41.532 "state": "configuring", 00:07:41.532 "raid_level": "concat", 00:07:41.532 "superblock": true, 00:07:41.532 "num_base_bdevs": 2, 00:07:41.532 "num_base_bdevs_discovered": 1, 00:07:41.532 "num_base_bdevs_operational": 2, 00:07:41.532 "base_bdevs_list": [ 00:07:41.532 { 00:07:41.532 "name": "BaseBdev1", 00:07:41.532 "uuid": "0b7ce2b8-1ed9-4c87-b9ec-a0a4b02d346d", 00:07:41.532 "is_configured": true, 00:07:41.532 "data_offset": 2048, 00:07:41.532 "data_size": 63488 00:07:41.532 }, 00:07:41.532 { 00:07:41.532 "name": "BaseBdev2", 00:07:41.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.532 "is_configured": false, 00:07:41.532 "data_offset": 0, 00:07:41.532 "data_size": 0 00:07:41.532 } 00:07:41.532 ] 00:07:41.532 }' 00:07:41.532 08:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.532 08:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.792 08:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:41.792 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:41.792 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.792 [2024-10-05 08:44:18.182832] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:41.792 [2024-10-05 08:44:18.182945] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:41.792 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.792 08:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:41.792 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.792 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.792 [2024-10-05 08:44:18.190867] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:41.792 [2024-10-05 08:44:18.192901] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:41.792 [2024-10-05 08:44:18.192949] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:41.792 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.792 08:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:41.792 08:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:41.792 08:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:41.792 08:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.792 08:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:07:41.792 08:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:41.792 08:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.792 08:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:41.792 08:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.792 08:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.792 08:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.792 08:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.792 08:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.792 08:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.792 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.792 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.792 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.792 08:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.792 "name": "Existed_Raid", 00:07:41.792 "uuid": "2569f7c1-c113-44cf-a5b4-79b9adb2b184", 00:07:41.792 "strip_size_kb": 64, 00:07:41.792 "state": "configuring", 00:07:41.792 "raid_level": "concat", 00:07:41.792 "superblock": true, 00:07:41.792 "num_base_bdevs": 2, 00:07:41.792 "num_base_bdevs_discovered": 1, 00:07:41.792 "num_base_bdevs_operational": 2, 00:07:41.792 "base_bdevs_list": [ 00:07:41.792 { 00:07:41.792 "name": "BaseBdev1", 00:07:41.792 "uuid": 
"0b7ce2b8-1ed9-4c87-b9ec-a0a4b02d346d", 00:07:41.792 "is_configured": true, 00:07:41.792 "data_offset": 2048, 00:07:41.792 "data_size": 63488 00:07:41.792 }, 00:07:41.792 { 00:07:41.792 "name": "BaseBdev2", 00:07:41.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.792 "is_configured": false, 00:07:41.792 "data_offset": 0, 00:07:41.792 "data_size": 0 00:07:41.792 } 00:07:41.792 ] 00:07:41.792 }' 00:07:41.792 08:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.792 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.360 08:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:42.360 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.360 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.360 [2024-10-05 08:44:18.642307] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:42.360 [2024-10-05 08:44:18.642691] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:42.360 [2024-10-05 08:44:18.642753] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:42.360 [2024-10-05 08:44:18.643074] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:42.360 [2024-10-05 08:44:18.643268] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:42.360 [2024-10-05 08:44:18.643315] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:42.360 BaseBdev2 00:07:42.360 [2024-10-05 08:44:18.643516] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:42.360 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:07:42.360 08:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:42.360 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:42.360 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:42.360 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:42.360 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:42.360 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:42.360 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:42.360 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.360 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.360 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.360 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:42.360 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.360 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.360 [ 00:07:42.360 { 00:07:42.360 "name": "BaseBdev2", 00:07:42.360 "aliases": [ 00:07:42.360 "efa0091a-d778-4446-b74c-03d66848391e" 00:07:42.360 ], 00:07:42.360 "product_name": "Malloc disk", 00:07:42.360 "block_size": 512, 00:07:42.360 "num_blocks": 65536, 00:07:42.360 "uuid": "efa0091a-d778-4446-b74c-03d66848391e", 00:07:42.360 "assigned_rate_limits": { 00:07:42.360 "rw_ios_per_sec": 0, 00:07:42.360 "rw_mbytes_per_sec": 0, 00:07:42.360 "r_mbytes_per_sec": 0, 
00:07:42.360 "w_mbytes_per_sec": 0 00:07:42.360 }, 00:07:42.360 "claimed": true, 00:07:42.360 "claim_type": "exclusive_write", 00:07:42.360 "zoned": false, 00:07:42.360 "supported_io_types": { 00:07:42.360 "read": true, 00:07:42.360 "write": true, 00:07:42.360 "unmap": true, 00:07:42.360 "flush": true, 00:07:42.360 "reset": true, 00:07:42.360 "nvme_admin": false, 00:07:42.360 "nvme_io": false, 00:07:42.360 "nvme_io_md": false, 00:07:42.360 "write_zeroes": true, 00:07:42.360 "zcopy": true, 00:07:42.360 "get_zone_info": false, 00:07:42.360 "zone_management": false, 00:07:42.360 "zone_append": false, 00:07:42.360 "compare": false, 00:07:42.360 "compare_and_write": false, 00:07:42.360 "abort": true, 00:07:42.360 "seek_hole": false, 00:07:42.360 "seek_data": false, 00:07:42.360 "copy": true, 00:07:42.360 "nvme_iov_md": false 00:07:42.360 }, 00:07:42.360 "memory_domains": [ 00:07:42.360 { 00:07:42.360 "dma_device_id": "system", 00:07:42.360 "dma_device_type": 1 00:07:42.360 }, 00:07:42.360 { 00:07:42.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.360 "dma_device_type": 2 00:07:42.360 } 00:07:42.360 ], 00:07:42.360 "driver_specific": {} 00:07:42.360 } 00:07:42.360 ] 00:07:42.360 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.360 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:42.360 08:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:42.360 08:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:42.360 08:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:42.360 08:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.360 08:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:07:42.360 08:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:42.360 08:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.360 08:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:42.360 08:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.360 08:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.360 08:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.360 08:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.360 08:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.360 08:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.360 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.360 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.360 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.360 08:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.360 "name": "Existed_Raid", 00:07:42.360 "uuid": "2569f7c1-c113-44cf-a5b4-79b9adb2b184", 00:07:42.360 "strip_size_kb": 64, 00:07:42.360 "state": "online", 00:07:42.360 "raid_level": "concat", 00:07:42.360 "superblock": true, 00:07:42.360 "num_base_bdevs": 2, 00:07:42.360 "num_base_bdevs_discovered": 2, 00:07:42.360 "num_base_bdevs_operational": 2, 00:07:42.360 "base_bdevs_list": [ 00:07:42.360 { 00:07:42.360 "name": "BaseBdev1", 00:07:42.360 "uuid": 
"0b7ce2b8-1ed9-4c87-b9ec-a0a4b02d346d", 00:07:42.360 "is_configured": true, 00:07:42.360 "data_offset": 2048, 00:07:42.360 "data_size": 63488 00:07:42.360 }, 00:07:42.360 { 00:07:42.360 "name": "BaseBdev2", 00:07:42.360 "uuid": "efa0091a-d778-4446-b74c-03d66848391e", 00:07:42.360 "is_configured": true, 00:07:42.360 "data_offset": 2048, 00:07:42.360 "data_size": 63488 00:07:42.360 } 00:07:42.360 ] 00:07:42.360 }' 00:07:42.360 08:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.360 08:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.620 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:42.620 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:42.620 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:42.620 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:42.620 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:42.620 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:42.620 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:42.620 08:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.620 08:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.620 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:42.620 [2024-10-05 08:44:19.069846] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:42.620 08:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:07:42.880 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:42.880 "name": "Existed_Raid", 00:07:42.880 "aliases": [ 00:07:42.880 "2569f7c1-c113-44cf-a5b4-79b9adb2b184" 00:07:42.880 ], 00:07:42.880 "product_name": "Raid Volume", 00:07:42.880 "block_size": 512, 00:07:42.880 "num_blocks": 126976, 00:07:42.880 "uuid": "2569f7c1-c113-44cf-a5b4-79b9adb2b184", 00:07:42.880 "assigned_rate_limits": { 00:07:42.880 "rw_ios_per_sec": 0, 00:07:42.880 "rw_mbytes_per_sec": 0, 00:07:42.880 "r_mbytes_per_sec": 0, 00:07:42.880 "w_mbytes_per_sec": 0 00:07:42.880 }, 00:07:42.880 "claimed": false, 00:07:42.880 "zoned": false, 00:07:42.880 "supported_io_types": { 00:07:42.880 "read": true, 00:07:42.880 "write": true, 00:07:42.880 "unmap": true, 00:07:42.880 "flush": true, 00:07:42.880 "reset": true, 00:07:42.880 "nvme_admin": false, 00:07:42.880 "nvme_io": false, 00:07:42.880 "nvme_io_md": false, 00:07:42.880 "write_zeroes": true, 00:07:42.880 "zcopy": false, 00:07:42.880 "get_zone_info": false, 00:07:42.880 "zone_management": false, 00:07:42.880 "zone_append": false, 00:07:42.880 "compare": false, 00:07:42.880 "compare_and_write": false, 00:07:42.880 "abort": false, 00:07:42.880 "seek_hole": false, 00:07:42.880 "seek_data": false, 00:07:42.880 "copy": false, 00:07:42.880 "nvme_iov_md": false 00:07:42.880 }, 00:07:42.880 "memory_domains": [ 00:07:42.880 { 00:07:42.880 "dma_device_id": "system", 00:07:42.880 "dma_device_type": 1 00:07:42.880 }, 00:07:42.880 { 00:07:42.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.880 "dma_device_type": 2 00:07:42.880 }, 00:07:42.880 { 00:07:42.880 "dma_device_id": "system", 00:07:42.880 "dma_device_type": 1 00:07:42.880 }, 00:07:42.880 { 00:07:42.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.880 "dma_device_type": 2 00:07:42.880 } 00:07:42.880 ], 00:07:42.880 "driver_specific": { 00:07:42.880 "raid": { 00:07:42.880 "uuid": "2569f7c1-c113-44cf-a5b4-79b9adb2b184", 00:07:42.880 
"strip_size_kb": 64, 00:07:42.880 "state": "online", 00:07:42.880 "raid_level": "concat", 00:07:42.880 "superblock": true, 00:07:42.880 "num_base_bdevs": 2, 00:07:42.880 "num_base_bdevs_discovered": 2, 00:07:42.880 "num_base_bdevs_operational": 2, 00:07:42.880 "base_bdevs_list": [ 00:07:42.880 { 00:07:42.880 "name": "BaseBdev1", 00:07:42.880 "uuid": "0b7ce2b8-1ed9-4c87-b9ec-a0a4b02d346d", 00:07:42.880 "is_configured": true, 00:07:42.880 "data_offset": 2048, 00:07:42.880 "data_size": 63488 00:07:42.880 }, 00:07:42.880 { 00:07:42.880 "name": "BaseBdev2", 00:07:42.880 "uuid": "efa0091a-d778-4446-b74c-03d66848391e", 00:07:42.880 "is_configured": true, 00:07:42.880 "data_offset": 2048, 00:07:42.880 "data_size": 63488 00:07:42.880 } 00:07:42.880 ] 00:07:42.880 } 00:07:42.880 } 00:07:42.880 }' 00:07:42.880 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:42.880 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:42.880 BaseBdev2' 00:07:42.880 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:42.880 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:42.880 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:42.880 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:42.880 08:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.880 08:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.880 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:07:42.880 08:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.880 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:42.880 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:42.880 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:42.880 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:42.880 08:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.880 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:42.880 08:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.880 08:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.880 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:42.880 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:42.880 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:42.880 08:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.880 08:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.880 [2024-10-05 08:44:19.301232] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:42.880 [2024-10-05 08:44:19.301306] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:42.880 [2024-10-05 08:44:19.301374] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:07:43.141 08:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.141 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:43.141 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:43.141 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:43.141 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:43.141 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:43.141 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:43.141 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.141 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:43.141 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:43.141 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.141 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:43.141 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.141 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.141 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.141 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.141 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.141 
08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.141 08:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.141 08:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.141 08:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.141 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.141 "name": "Existed_Raid", 00:07:43.141 "uuid": "2569f7c1-c113-44cf-a5b4-79b9adb2b184", 00:07:43.141 "strip_size_kb": 64, 00:07:43.141 "state": "offline", 00:07:43.141 "raid_level": "concat", 00:07:43.141 "superblock": true, 00:07:43.141 "num_base_bdevs": 2, 00:07:43.141 "num_base_bdevs_discovered": 1, 00:07:43.141 "num_base_bdevs_operational": 1, 00:07:43.141 "base_bdevs_list": [ 00:07:43.141 { 00:07:43.141 "name": null, 00:07:43.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.141 "is_configured": false, 00:07:43.141 "data_offset": 0, 00:07:43.141 "data_size": 63488 00:07:43.141 }, 00:07:43.141 { 00:07:43.141 "name": "BaseBdev2", 00:07:43.141 "uuid": "efa0091a-d778-4446-b74c-03d66848391e", 00:07:43.141 "is_configured": true, 00:07:43.141 "data_offset": 2048, 00:07:43.141 "data_size": 63488 00:07:43.141 } 00:07:43.141 ] 00:07:43.141 }' 00:07:43.141 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.141 08:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.400 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:43.401 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:43.401 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.401 08:44:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:43.401 08:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.401 08:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.401 08:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.401 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:43.401 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:43.401 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:43.401 08:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.401 08:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.401 [2024-10-05 08:44:19.851601] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:43.401 [2024-10-05 08:44:19.851663] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:43.661 08:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.661 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:43.661 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:43.661 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.661 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:43.661 08:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.661 08:44:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.661 08:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.661 08:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:43.661 08:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:43.661 08:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:43.661 08:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61702 00:07:43.661 08:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 61702 ']' 00:07:43.661 08:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 61702 00:07:43.661 08:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:43.661 08:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:43.661 08:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61702 00:07:43.661 killing process with pid 61702 00:07:43.661 08:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:43.661 08:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:43.661 08:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61702' 00:07:43.661 08:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 61702 00:07:43.661 [2024-10-05 08:44:20.044295] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:43.661 08:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 61702 00:07:43.661 [2024-10-05 08:44:20.061138] 
bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:45.062 ************************************ 00:07:45.062 END TEST raid_state_function_test_sb 00:07:45.062 ************************************ 00:07:45.062 08:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:45.062 00:07:45.062 real 0m5.104s 00:07:45.062 user 0m7.035s 00:07:45.062 sys 0m0.893s 00:07:45.062 08:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:45.062 08:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.062 08:44:21 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:45.062 08:44:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:45.062 08:44:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:45.062 08:44:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:45.062 ************************************ 00:07:45.062 START TEST raid_superblock_test 00:07:45.062 ************************************ 00:07:45.062 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 00:07:45.062 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:45.062 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:45.062 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:45.062 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:45.062 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:45.062 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:45.062 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:45.062 
08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:45.062 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:45.062 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:45.062 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:45.062 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:45.062 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:45.062 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:45.062 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:45.062 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:45.062 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61924 00:07:45.062 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:45.062 08:44:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61924 00:07:45.062 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 61924 ']' 00:07:45.062 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.062 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:45.062 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:45.062 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:45.062 08:44:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.322 [2024-10-05 08:44:21.570273] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:07:45.322 [2024-10-05 08:44:21.570394] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61924 ] 00:07:45.322 [2024-10-05 08:44:21.737374] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.582 [2024-10-05 08:44:21.978028] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.842 [2024-10-05 08:44:22.204317] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:45.843 [2024-10-05 08:44:22.204354] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.103 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:46.103 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:46.103 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:46.103 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:46.103 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:46.103 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:46.103 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:46.103 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:46.103 08:44:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:46.103 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:46.103 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:46.103 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.103 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.103 malloc1 00:07:46.103 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.103 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:46.103 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.103 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.103 [2024-10-05 08:44:22.454408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:46.103 [2024-10-05 08:44:22.454564] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.103 [2024-10-05 08:44:22.454613] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:46.103 [2024-10-05 08:44:22.454647] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.103 [2024-10-05 08:44:22.457066] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.103 [2024-10-05 08:44:22.457141] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:46.103 pt1 00:07:46.103 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.103 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:46.103 08:44:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:46.103 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:46.103 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:46.103 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:46.103 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:46.103 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:46.103 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:46.103 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:46.103 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.103 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.103 malloc2 00:07:46.103 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.103 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:46.103 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.103 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.103 [2024-10-05 08:44:22.528952] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:46.103 [2024-10-05 08:44:22.529027] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.103 [2024-10-05 08:44:22.529055] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:46.103 
[2024-10-05 08:44:22.529064] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.103 [2024-10-05 08:44:22.531471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.103 [2024-10-05 08:44:22.531507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:46.103 pt2 00:07:46.103 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.103 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:46.103 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:46.103 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:46.103 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.103 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.103 [2024-10-05 08:44:22.541003] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:46.103 [2024-10-05 08:44:22.543037] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:46.103 [2024-10-05 08:44:22.543202] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:46.103 [2024-10-05 08:44:22.543215] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:46.103 [2024-10-05 08:44:22.543469] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:46.104 [2024-10-05 08:44:22.543626] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:46.104 [2024-10-05 08:44:22.543638] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:46.104 [2024-10-05 08:44:22.543780] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:46.104 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.104 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:46.104 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:46.104 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:46.104 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:46.104 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.104 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.104 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.104 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.104 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.104 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.104 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.104 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:46.104 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.104 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.104 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.364 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.364 "name": "raid_bdev1", 00:07:46.364 "uuid": 
"54616c12-cf8b-45ad-be6f-a98f3393d192", 00:07:46.364 "strip_size_kb": 64, 00:07:46.364 "state": "online", 00:07:46.364 "raid_level": "concat", 00:07:46.364 "superblock": true, 00:07:46.364 "num_base_bdevs": 2, 00:07:46.364 "num_base_bdevs_discovered": 2, 00:07:46.364 "num_base_bdevs_operational": 2, 00:07:46.364 "base_bdevs_list": [ 00:07:46.364 { 00:07:46.364 "name": "pt1", 00:07:46.364 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:46.364 "is_configured": true, 00:07:46.364 "data_offset": 2048, 00:07:46.364 "data_size": 63488 00:07:46.364 }, 00:07:46.364 { 00:07:46.364 "name": "pt2", 00:07:46.364 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:46.364 "is_configured": true, 00:07:46.364 "data_offset": 2048, 00:07:46.364 "data_size": 63488 00:07:46.364 } 00:07:46.364 ] 00:07:46.364 }' 00:07:46.364 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.364 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.624 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:46.624 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:46.624 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:46.624 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:46.624 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:46.624 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:46.624 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:46.624 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.624 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.624 
08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:46.624 [2024-10-05 08:44:22.952526] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:46.624 08:44:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.624 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:46.624 "name": "raid_bdev1", 00:07:46.624 "aliases": [ 00:07:46.624 "54616c12-cf8b-45ad-be6f-a98f3393d192" 00:07:46.624 ], 00:07:46.624 "product_name": "Raid Volume", 00:07:46.624 "block_size": 512, 00:07:46.624 "num_blocks": 126976, 00:07:46.624 "uuid": "54616c12-cf8b-45ad-be6f-a98f3393d192", 00:07:46.624 "assigned_rate_limits": { 00:07:46.624 "rw_ios_per_sec": 0, 00:07:46.624 "rw_mbytes_per_sec": 0, 00:07:46.624 "r_mbytes_per_sec": 0, 00:07:46.624 "w_mbytes_per_sec": 0 00:07:46.624 }, 00:07:46.624 "claimed": false, 00:07:46.624 "zoned": false, 00:07:46.624 "supported_io_types": { 00:07:46.624 "read": true, 00:07:46.624 "write": true, 00:07:46.624 "unmap": true, 00:07:46.624 "flush": true, 00:07:46.624 "reset": true, 00:07:46.624 "nvme_admin": false, 00:07:46.624 "nvme_io": false, 00:07:46.624 "nvme_io_md": false, 00:07:46.624 "write_zeroes": true, 00:07:46.624 "zcopy": false, 00:07:46.624 "get_zone_info": false, 00:07:46.624 "zone_management": false, 00:07:46.624 "zone_append": false, 00:07:46.624 "compare": false, 00:07:46.624 "compare_and_write": false, 00:07:46.624 "abort": false, 00:07:46.624 "seek_hole": false, 00:07:46.624 "seek_data": false, 00:07:46.624 "copy": false, 00:07:46.624 "nvme_iov_md": false 00:07:46.624 }, 00:07:46.624 "memory_domains": [ 00:07:46.624 { 00:07:46.624 "dma_device_id": "system", 00:07:46.624 "dma_device_type": 1 00:07:46.624 }, 00:07:46.624 { 00:07:46.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.624 "dma_device_type": 2 00:07:46.624 }, 00:07:46.624 { 00:07:46.624 "dma_device_id": "system", 00:07:46.624 
"dma_device_type": 1 00:07:46.624 }, 00:07:46.624 { 00:07:46.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.624 "dma_device_type": 2 00:07:46.624 } 00:07:46.624 ], 00:07:46.624 "driver_specific": { 00:07:46.624 "raid": { 00:07:46.624 "uuid": "54616c12-cf8b-45ad-be6f-a98f3393d192", 00:07:46.624 "strip_size_kb": 64, 00:07:46.624 "state": "online", 00:07:46.624 "raid_level": "concat", 00:07:46.624 "superblock": true, 00:07:46.624 "num_base_bdevs": 2, 00:07:46.624 "num_base_bdevs_discovered": 2, 00:07:46.624 "num_base_bdevs_operational": 2, 00:07:46.624 "base_bdevs_list": [ 00:07:46.624 { 00:07:46.624 "name": "pt1", 00:07:46.624 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:46.624 "is_configured": true, 00:07:46.624 "data_offset": 2048, 00:07:46.624 "data_size": 63488 00:07:46.624 }, 00:07:46.624 { 00:07:46.624 "name": "pt2", 00:07:46.624 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:46.624 "is_configured": true, 00:07:46.624 "data_offset": 2048, 00:07:46.624 "data_size": 63488 00:07:46.624 } 00:07:46.624 ] 00:07:46.624 } 00:07:46.624 } 00:07:46.624 }' 00:07:46.624 08:44:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:46.624 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:46.624 pt2' 00:07:46.624 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.624 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:46.624 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:46.624 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:46.624 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.624 08:44:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.624 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.624 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:46.885 [2024-10-05 08:44:23.160067] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=54616c12-cf8b-45ad-be6f-a98f3393d192 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 54616c12-cf8b-45ad-be6f-a98f3393d192 ']' 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.885 [2024-10-05 08:44:23.203752] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:46.885 [2024-10-05 08:44:23.203817] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:46.885 [2024-10-05 08:44:23.203927] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:46.885 [2024-10-05 08:44:23.204002] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:46.885 [2024-10-05 08:44:23.204055] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.885 [2024-10-05 08:44:23.323547] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:46.885 [2024-10-05 08:44:23.325627] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:46.885 [2024-10-05 08:44:23.325695] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:46.885 [2024-10-05 08:44:23.325753] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:46.885 [2024-10-05 08:44:23.325769] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:46.885 [2024-10-05 08:44:23.325779] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:46.885 request: 00:07:46.885 { 00:07:46.885 "name": "raid_bdev1", 00:07:46.885 "raid_level": "concat", 00:07:46.885 "base_bdevs": [ 00:07:46.885 "malloc1", 00:07:46.885 "malloc2" 00:07:46.885 ], 00:07:46.885 "strip_size_kb": 64, 00:07:46.885 "superblock": false, 00:07:46.885 "method": "bdev_raid_create", 00:07:46.885 "req_id": 1 00:07:46.885 } 00:07:46.885 Got JSON-RPC error response 00:07:46.885 response: 00:07:46.885 { 00:07:46.885 "code": -17, 00:07:46.885 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:46.885 } 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:46.885 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.146 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:47.146 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:47.146 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:07:47.146 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.146 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.146 [2024-10-05 08:44:23.391404] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:47.146 [2024-10-05 08:44:23.391492] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:47.146 [2024-10-05 08:44:23.391543] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:47.146 [2024-10-05 08:44:23.391573] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:47.146 [2024-10-05 08:44:23.394041] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:47.146 [2024-10-05 08:44:23.394109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:47.146 [2024-10-05 08:44:23.394199] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:47.146 [2024-10-05 08:44:23.394287] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:47.146 pt1 00:07:47.146 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.146 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:47.146 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:47.146 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:47.146 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:47.146 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:47.146 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:07:47.146 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.146 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.146 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.146 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.146 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.147 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.147 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:47.147 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.147 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.147 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.147 "name": "raid_bdev1", 00:07:47.147 "uuid": "54616c12-cf8b-45ad-be6f-a98f3393d192", 00:07:47.147 "strip_size_kb": 64, 00:07:47.147 "state": "configuring", 00:07:47.147 "raid_level": "concat", 00:07:47.147 "superblock": true, 00:07:47.147 "num_base_bdevs": 2, 00:07:47.147 "num_base_bdevs_discovered": 1, 00:07:47.147 "num_base_bdevs_operational": 2, 00:07:47.147 "base_bdevs_list": [ 00:07:47.147 { 00:07:47.147 "name": "pt1", 00:07:47.147 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:47.147 "is_configured": true, 00:07:47.147 "data_offset": 2048, 00:07:47.147 "data_size": 63488 00:07:47.147 }, 00:07:47.147 { 00:07:47.147 "name": null, 00:07:47.147 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:47.147 "is_configured": false, 00:07:47.147 "data_offset": 2048, 00:07:47.147 "data_size": 63488 00:07:47.147 } 00:07:47.147 ] 00:07:47.147 }' 00:07:47.147 08:44:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.147 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.407 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:47.407 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:47.407 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:47.407 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:47.407 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.407 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.407 [2024-10-05 08:44:23.806681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:47.408 [2024-10-05 08:44:23.806744] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:47.408 [2024-10-05 08:44:23.806766] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:47.408 [2024-10-05 08:44:23.806778] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:47.408 [2024-10-05 08:44:23.807262] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:47.408 [2024-10-05 08:44:23.807295] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:47.408 [2024-10-05 08:44:23.807366] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:47.408 [2024-10-05 08:44:23.807389] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:47.408 [2024-10-05 08:44:23.807518] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:47.408 [2024-10-05 08:44:23.807529] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:47.408 [2024-10-05 08:44:23.807772] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:47.408 [2024-10-05 08:44:23.807909] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:47.408 [2024-10-05 08:44:23.807918] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:47.408 [2024-10-05 08:44:23.808067] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:47.408 pt2 00:07:47.408 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.408 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:47.408 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:47.408 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:47.408 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:47.408 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:47.408 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:47.408 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:47.408 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:47.408 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.408 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.408 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.408 08:44:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.408 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.408 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.408 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.408 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:47.408 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.408 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.408 "name": "raid_bdev1", 00:07:47.408 "uuid": "54616c12-cf8b-45ad-be6f-a98f3393d192", 00:07:47.408 "strip_size_kb": 64, 00:07:47.408 "state": "online", 00:07:47.408 "raid_level": "concat", 00:07:47.408 "superblock": true, 00:07:47.408 "num_base_bdevs": 2, 00:07:47.408 "num_base_bdevs_discovered": 2, 00:07:47.408 "num_base_bdevs_operational": 2, 00:07:47.408 "base_bdevs_list": [ 00:07:47.408 { 00:07:47.408 "name": "pt1", 00:07:47.408 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:47.408 "is_configured": true, 00:07:47.408 "data_offset": 2048, 00:07:47.408 "data_size": 63488 00:07:47.408 }, 00:07:47.408 { 00:07:47.408 "name": "pt2", 00:07:47.408 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:47.408 "is_configured": true, 00:07:47.408 "data_offset": 2048, 00:07:47.408 "data_size": 63488 00:07:47.408 } 00:07:47.408 ] 00:07:47.408 }' 00:07:47.408 08:44:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.408 08:44:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.979 08:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:47.979 08:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:47.979 
08:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:47.979 08:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:47.979 08:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:47.979 08:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:47.979 08:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:47.979 08:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:47.979 08:44:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.979 08:44:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.979 [2024-10-05 08:44:24.190310] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:47.979 08:44:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.979 08:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:47.979 "name": "raid_bdev1", 00:07:47.979 "aliases": [ 00:07:47.979 "54616c12-cf8b-45ad-be6f-a98f3393d192" 00:07:47.979 ], 00:07:47.979 "product_name": "Raid Volume", 00:07:47.979 "block_size": 512, 00:07:47.979 "num_blocks": 126976, 00:07:47.979 "uuid": "54616c12-cf8b-45ad-be6f-a98f3393d192", 00:07:47.979 "assigned_rate_limits": { 00:07:47.979 "rw_ios_per_sec": 0, 00:07:47.979 "rw_mbytes_per_sec": 0, 00:07:47.979 "r_mbytes_per_sec": 0, 00:07:47.979 "w_mbytes_per_sec": 0 00:07:47.979 }, 00:07:47.979 "claimed": false, 00:07:47.979 "zoned": false, 00:07:47.979 "supported_io_types": { 00:07:47.979 "read": true, 00:07:47.979 "write": true, 00:07:47.979 "unmap": true, 00:07:47.979 "flush": true, 00:07:47.979 "reset": true, 00:07:47.979 "nvme_admin": false, 00:07:47.979 "nvme_io": false, 00:07:47.979 "nvme_io_md": false, 00:07:47.979 
"write_zeroes": true, 00:07:47.979 "zcopy": false, 00:07:47.979 "get_zone_info": false, 00:07:47.979 "zone_management": false, 00:07:47.979 "zone_append": false, 00:07:47.979 "compare": false, 00:07:47.979 "compare_and_write": false, 00:07:47.979 "abort": false, 00:07:47.979 "seek_hole": false, 00:07:47.979 "seek_data": false, 00:07:47.979 "copy": false, 00:07:47.979 "nvme_iov_md": false 00:07:47.979 }, 00:07:47.979 "memory_domains": [ 00:07:47.979 { 00:07:47.979 "dma_device_id": "system", 00:07:47.979 "dma_device_type": 1 00:07:47.979 }, 00:07:47.979 { 00:07:47.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.980 "dma_device_type": 2 00:07:47.980 }, 00:07:47.980 { 00:07:47.980 "dma_device_id": "system", 00:07:47.980 "dma_device_type": 1 00:07:47.980 }, 00:07:47.980 { 00:07:47.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.980 "dma_device_type": 2 00:07:47.980 } 00:07:47.980 ], 00:07:47.980 "driver_specific": { 00:07:47.980 "raid": { 00:07:47.980 "uuid": "54616c12-cf8b-45ad-be6f-a98f3393d192", 00:07:47.980 "strip_size_kb": 64, 00:07:47.980 "state": "online", 00:07:47.980 "raid_level": "concat", 00:07:47.980 "superblock": true, 00:07:47.980 "num_base_bdevs": 2, 00:07:47.980 "num_base_bdevs_discovered": 2, 00:07:47.980 "num_base_bdevs_operational": 2, 00:07:47.980 "base_bdevs_list": [ 00:07:47.980 { 00:07:47.980 "name": "pt1", 00:07:47.980 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:47.980 "is_configured": true, 00:07:47.980 "data_offset": 2048, 00:07:47.980 "data_size": 63488 00:07:47.980 }, 00:07:47.980 { 00:07:47.980 "name": "pt2", 00:07:47.980 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:47.980 "is_configured": true, 00:07:47.980 "data_offset": 2048, 00:07:47.980 "data_size": 63488 00:07:47.980 } 00:07:47.980 ] 00:07:47.980 } 00:07:47.980 } 00:07:47.980 }' 00:07:47.980 08:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:07:47.980 08:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:47.980 pt2' 00:07:47.980 08:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.980 08:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:47.980 08:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:47.980 08:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.980 08:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:47.980 08:44:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.980 08:44:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.980 08:44:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.980 08:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:47.980 08:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:47.980 08:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:47.980 08:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:47.980 08:44:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.980 08:44:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.980 08:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.980 08:44:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.980 08:44:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:47.980 08:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:47.980 08:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:47.980 08:44:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.980 08:44:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.980 08:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:47.980 [2024-10-05 08:44:24.369968] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:47.980 08:44:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.980 08:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 54616c12-cf8b-45ad-be6f-a98f3393d192 '!=' 54616c12-cf8b-45ad-be6f-a98f3393d192 ']' 00:07:47.980 08:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:47.980 08:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:47.980 08:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:47.980 08:44:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61924 00:07:47.980 08:44:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 61924 ']' 00:07:47.980 08:44:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 61924 00:07:47.980 08:44:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:47.980 08:44:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:47.980 08:44:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61924 00:07:47.980 killing process with pid 61924 
00:07:47.980 08:44:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:47.980 08:44:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:47.980 08:44:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61924' 00:07:47.980 08:44:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 61924 00:07:47.980 [2024-10-05 08:44:24.448396] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:47.980 08:44:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 61924 00:07:47.980 [2024-10-05 08:44:24.448477] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:47.980 [2024-10-05 08:44:24.448520] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:47.980 [2024-10-05 08:44:24.448531] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:48.240 [2024-10-05 08:44:24.660244] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:49.624 08:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:49.624 00:07:49.624 real 0m4.540s 00:07:49.624 user 0m6.009s 00:07:49.624 sys 0m0.835s 00:07:49.624 ************************************ 00:07:49.624 END TEST raid_superblock_test 00:07:49.624 ************************************ 00:07:49.624 08:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:49.624 08:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.624 08:44:26 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:49.624 08:44:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:49.624 08:44:26 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:07:49.624 08:44:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:49.624 ************************************ 00:07:49.624 START TEST raid_read_error_test 00:07:49.624 ************************************ 00:07:49.624 08:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:07:49.624 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:49.624 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:49.624 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:49.885 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:49.885 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:49.885 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:49.885 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:49.885 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:49.885 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:49.885 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:49.885 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:49.885 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:49.885 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:49.885 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:49.885 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:49.885 08:44:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:49.885 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:49.885 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:49.885 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:49.885 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:49.885 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:49.885 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:49.885 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.uip01sVsn7 00:07:49.885 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62106 00:07:49.885 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62106 00:07:49.885 08:44:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:49.885 08:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 62106 ']' 00:07:49.885 08:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.885 08:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:49.885 08:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:49.885 08:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:49.885 08:44:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.885 [2024-10-05 08:44:26.219526] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:07:49.885 [2024-10-05 08:44:26.219824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62106 ] 00:07:50.146 [2024-10-05 08:44:26.403144] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.406 [2024-10-05 08:44:26.645819] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.406 [2024-10-05 08:44:26.871504] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.406 [2024-10-05 08:44:26.871651] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.667 08:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:50.667 08:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:50.667 08:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:50.667 08:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:50.667 08:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.667 08:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.667 BaseBdev1_malloc 00:07:50.667 08:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.667 08:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:50.667 08:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.667 08:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.667 true 00:07:50.667 08:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.667 08:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:50.667 08:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.667 08:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.667 [2024-10-05 08:44:27.095360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:50.667 [2024-10-05 08:44:27.095424] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.667 [2024-10-05 08:44:27.095441] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:50.667 [2024-10-05 08:44:27.095453] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.667 [2024-10-05 08:44:27.097794] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.667 [2024-10-05 08:44:27.097909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:50.667 BaseBdev1 00:07:50.667 08:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.667 08:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:50.667 08:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:50.667 08:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.667 08:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:50.929 BaseBdev2_malloc 00:07:50.929 08:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.929 08:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:50.929 08:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.929 08:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.929 true 00:07:50.929 08:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.929 08:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:50.929 08:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.929 08:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.929 [2024-10-05 08:44:27.175674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:50.929 [2024-10-05 08:44:27.175729] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.929 [2024-10-05 08:44:27.175744] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:50.929 [2024-10-05 08:44:27.175755] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.929 [2024-10-05 08:44:27.178060] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.929 [2024-10-05 08:44:27.178097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:50.929 BaseBdev2 00:07:50.929 08:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.929 08:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:50.929 
08:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.929 08:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.929 [2024-10-05 08:44:27.187730] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:50.929 [2024-10-05 08:44:27.189777] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:50.929 [2024-10-05 08:44:27.190059] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:50.929 [2024-10-05 08:44:27.190079] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:50.929 [2024-10-05 08:44:27.190302] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:50.929 [2024-10-05 08:44:27.190462] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:50.929 [2024-10-05 08:44:27.190471] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:50.929 [2024-10-05 08:44:27.190623] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:50.929 08:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.929 08:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:50.929 08:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:50.929 08:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:50.929 08:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:50.929 08:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.929 08:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:50.929 08:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.929 08:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.929 08:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.929 08:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.929 08:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.929 08:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:50.929 08:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.929 08:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.929 08:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.929 08:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.929 "name": "raid_bdev1", 00:07:50.929 "uuid": "bcf61a66-7e7f-40fd-86b0-e36b25077ca6", 00:07:50.929 "strip_size_kb": 64, 00:07:50.929 "state": "online", 00:07:50.929 "raid_level": "concat", 00:07:50.929 "superblock": true, 00:07:50.929 "num_base_bdevs": 2, 00:07:50.929 "num_base_bdevs_discovered": 2, 00:07:50.929 "num_base_bdevs_operational": 2, 00:07:50.929 "base_bdevs_list": [ 00:07:50.929 { 00:07:50.929 "name": "BaseBdev1", 00:07:50.929 "uuid": "d39d1560-3d83-5f0f-897d-fd8d25dba814", 00:07:50.929 "is_configured": true, 00:07:50.929 "data_offset": 2048, 00:07:50.929 "data_size": 63488 00:07:50.929 }, 00:07:50.929 { 00:07:50.929 "name": "BaseBdev2", 00:07:50.929 "uuid": "10bb0445-781d-5e07-a601-6eb3c46c4ecc", 00:07:50.929 "is_configured": true, 00:07:50.929 "data_offset": 2048, 00:07:50.929 "data_size": 63488 00:07:50.929 } 00:07:50.929 ] 00:07:50.929 }' 00:07:50.929 08:44:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.929 08:44:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.189 08:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:51.189 08:44:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:51.449 [2024-10-05 08:44:27.684285] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:52.413 08:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:52.413 08:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.413 08:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.413 08:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.413 08:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:52.413 08:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:52.413 08:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:52.413 08:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:52.413 08:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:52.413 08:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:52.413 08:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:52.413 08:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.413 08:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:52.413 08:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.413 08:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.413 08:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.413 08:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.413 08:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.413 08:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.413 08:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:52.413 08:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.413 08:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.413 08:44:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.413 "name": "raid_bdev1", 00:07:52.413 "uuid": "bcf61a66-7e7f-40fd-86b0-e36b25077ca6", 00:07:52.413 "strip_size_kb": 64, 00:07:52.413 "state": "online", 00:07:52.413 "raid_level": "concat", 00:07:52.413 "superblock": true, 00:07:52.413 "num_base_bdevs": 2, 00:07:52.413 "num_base_bdevs_discovered": 2, 00:07:52.413 "num_base_bdevs_operational": 2, 00:07:52.413 "base_bdevs_list": [ 00:07:52.413 { 00:07:52.413 "name": "BaseBdev1", 00:07:52.413 "uuid": "d39d1560-3d83-5f0f-897d-fd8d25dba814", 00:07:52.413 "is_configured": true, 00:07:52.413 "data_offset": 2048, 00:07:52.414 "data_size": 63488 00:07:52.414 }, 00:07:52.414 { 00:07:52.414 "name": "BaseBdev2", 00:07:52.414 "uuid": "10bb0445-781d-5e07-a601-6eb3c46c4ecc", 00:07:52.414 "is_configured": true, 00:07:52.414 "data_offset": 2048, 00:07:52.414 "data_size": 63488 00:07:52.414 } 00:07:52.414 ] 00:07:52.414 }' 00:07:52.414 08:44:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.414 08:44:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.674 08:44:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:52.674 08:44:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.674 08:44:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.674 [2024-10-05 08:44:29.040688] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:52.674 [2024-10-05 08:44:29.040836] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:52.674 [2024-10-05 08:44:29.043378] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:52.674 [2024-10-05 08:44:29.043466] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:52.674 [2024-10-05 08:44:29.043522] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:52.674 [2024-10-05 08:44:29.043562] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:52.674 { 00:07:52.674 "results": [ 00:07:52.674 { 00:07:52.674 "job": "raid_bdev1", 00:07:52.674 "core_mask": "0x1", 00:07:52.674 "workload": "randrw", 00:07:52.674 "percentage": 50, 00:07:52.674 "status": "finished", 00:07:52.674 "queue_depth": 1, 00:07:52.674 "io_size": 131072, 00:07:52.674 "runtime": 1.357085, 00:07:52.674 "iops": 15004.218600898248, 00:07:52.674 "mibps": 1875.527325112281, 00:07:52.674 "io_failed": 1, 00:07:52.674 "io_timeout": 0, 00:07:52.674 "avg_latency_us": 93.61253836749461, 00:07:52.674 "min_latency_us": 24.593886462882097, 00:07:52.674 "max_latency_us": 1402.2986899563318 00:07:52.674 } 00:07:52.674 ], 00:07:52.674 "core_count": 1 00:07:52.674 } 00:07:52.674 08:44:29 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.674 08:44:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62106 00:07:52.674 08:44:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 62106 ']' 00:07:52.674 08:44:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 62106 00:07:52.674 08:44:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:52.674 08:44:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:52.674 08:44:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62106 00:07:52.674 killing process with pid 62106 00:07:52.674 08:44:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:52.674 08:44:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:52.674 08:44:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62106' 00:07:52.674 08:44:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 62106 00:07:52.674 [2024-10-05 08:44:29.080354] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:52.674 08:44:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 62106 00:07:52.935 [2024-10-05 08:44:29.226583] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:54.316 08:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.uip01sVsn7 00:07:54.316 08:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:54.316 08:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:54.316 08:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:54.316 08:44:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:54.316 ************************************ 00:07:54.316 END TEST raid_read_error_test 00:07:54.316 ************************************ 00:07:54.316 08:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:54.316 08:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:54.316 08:44:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:07:54.316 00:07:54.316 real 0m4.525s 00:07:54.316 user 0m5.179s 00:07:54.316 sys 0m0.658s 00:07:54.316 08:44:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:54.317 08:44:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.317 08:44:30 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:54.317 08:44:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:54.317 08:44:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.317 08:44:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:54.317 ************************************ 00:07:54.317 START TEST raid_write_error_test 00:07:54.317 ************************************ 00:07:54.317 08:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:07:54.317 08:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:54.317 08:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:54.317 08:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:54.317 08:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:54.317 08:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 
00:07:54.317 08:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:54.317 08:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:54.317 08:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:54.317 08:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:54.317 08:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:54.317 08:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:54.317 08:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:54.317 08:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:54.317 08:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:54.317 08:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:54.317 08:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:54.317 08:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:54.317 08:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:54.317 08:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:54.317 08:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:54.317 08:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:54.317 08:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:54.317 08:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.SI6aRew1SF 00:07:54.317 08:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62222 
00:07:54.317 08:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:54.317 08:44:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62222 00:07:54.317 08:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 62222 ']' 00:07:54.317 08:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.317 08:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:54.317 08:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.317 08:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:54.317 08:44:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.577 [2024-10-05 08:44:30.798580] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:07:54.577 [2024-10-05 08:44:30.798807] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62222 ] 00:07:54.577 [2024-10-05 08:44:30.975724] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.837 [2024-10-05 08:44:31.228741] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.096 [2024-10-05 08:44:31.461489] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:55.096 [2024-10-05 08:44:31.461527] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:55.356 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:55.356 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:55.356 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:55.356 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:55.356 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.356 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.356 BaseBdev1_malloc 00:07:55.356 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.356 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:55.356 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.356 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.356 true 00:07:55.356 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:55.356 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:55.356 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.356 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.356 [2024-10-05 08:44:31.694458] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:55.356 [2024-10-05 08:44:31.694594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:55.356 [2024-10-05 08:44:31.694614] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:55.356 [2024-10-05 08:44:31.694626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:55.356 [2024-10-05 08:44:31.696994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:55.356 [2024-10-05 08:44:31.697031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:55.356 BaseBdev1 00:07:55.356 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.356 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:55.356 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:55.356 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.356 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.356 BaseBdev2_malloc 00:07:55.356 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.356 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:55.356 08:44:31 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.356 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.356 true 00:07:55.356 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.356 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:55.356 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.356 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.356 [2024-10-05 08:44:31.799100] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:55.356 [2024-10-05 08:44:31.799156] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:55.356 [2024-10-05 08:44:31.799172] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:55.356 [2024-10-05 08:44:31.799183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:55.356 [2024-10-05 08:44:31.801499] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:55.357 [2024-10-05 08:44:31.801617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:55.357 BaseBdev2 00:07:55.357 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.357 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:55.357 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.357 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.357 [2024-10-05 08:44:31.811158] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:55.357 [2024-10-05 08:44:31.813188] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:55.357 [2024-10-05 08:44:31.813375] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:55.357 [2024-10-05 08:44:31.813389] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:55.357 [2024-10-05 08:44:31.813615] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:55.357 [2024-10-05 08:44:31.813791] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:55.357 [2024-10-05 08:44:31.813801] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:55.357 [2024-10-05 08:44:31.813940] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:55.357 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.357 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:55.357 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:55.357 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:55.357 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:55.357 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:55.357 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:55.357 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.357 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.357 08:44:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.357 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.357 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.357 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:55.357 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.357 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.616 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.617 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.617 "name": "raid_bdev1", 00:07:55.617 "uuid": "2ca38a87-8679-447d-9430-a43963f3ebea", 00:07:55.617 "strip_size_kb": 64, 00:07:55.617 "state": "online", 00:07:55.617 "raid_level": "concat", 00:07:55.617 "superblock": true, 00:07:55.617 "num_base_bdevs": 2, 00:07:55.617 "num_base_bdevs_discovered": 2, 00:07:55.617 "num_base_bdevs_operational": 2, 00:07:55.617 "base_bdevs_list": [ 00:07:55.617 { 00:07:55.617 "name": "BaseBdev1", 00:07:55.617 "uuid": "7d5dfb19-0d0e-5606-ae9a-7b5bce4132ef", 00:07:55.617 "is_configured": true, 00:07:55.617 "data_offset": 2048, 00:07:55.617 "data_size": 63488 00:07:55.617 }, 00:07:55.617 { 00:07:55.617 "name": "BaseBdev2", 00:07:55.617 "uuid": "2316ea55-fd44-5ead-849f-24124f32b01d", 00:07:55.617 "is_configured": true, 00:07:55.617 "data_offset": 2048, 00:07:55.617 "data_size": 63488 00:07:55.617 } 00:07:55.617 ] 00:07:55.617 }' 00:07:55.617 08:44:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.617 08:44:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.876 08:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:07:55.876 08:44:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:55.876 [2024-10-05 08:44:32.339734] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:56.817 08:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:56.817 08:44:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.817 08:44:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.817 08:44:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.817 08:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:56.817 08:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:56.817 08:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:56.817 08:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:56.817 08:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:56.817 08:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:56.817 08:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:56.817 08:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:56.817 08:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:56.817 08:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.817 08:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:07:56.817 08:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.817 08:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.817 08:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.817 08:44:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.817 08:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:56.817 08:44:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.817 08:44:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.077 08:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.077 "name": "raid_bdev1", 00:07:57.077 "uuid": "2ca38a87-8679-447d-9430-a43963f3ebea", 00:07:57.077 "strip_size_kb": 64, 00:07:57.077 "state": "online", 00:07:57.077 "raid_level": "concat", 00:07:57.077 "superblock": true, 00:07:57.077 "num_base_bdevs": 2, 00:07:57.077 "num_base_bdevs_discovered": 2, 00:07:57.077 "num_base_bdevs_operational": 2, 00:07:57.077 "base_bdevs_list": [ 00:07:57.077 { 00:07:57.077 "name": "BaseBdev1", 00:07:57.077 "uuid": "7d5dfb19-0d0e-5606-ae9a-7b5bce4132ef", 00:07:57.077 "is_configured": true, 00:07:57.077 "data_offset": 2048, 00:07:57.077 "data_size": 63488 00:07:57.077 }, 00:07:57.077 { 00:07:57.077 "name": "BaseBdev2", 00:07:57.077 "uuid": "2316ea55-fd44-5ead-849f-24124f32b01d", 00:07:57.077 "is_configured": true, 00:07:57.077 "data_offset": 2048, 00:07:57.077 "data_size": 63488 00:07:57.077 } 00:07:57.077 ] 00:07:57.077 }' 00:07:57.077 08:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.077 08:44:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.337 08:44:33 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:57.337 08:44:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.337 08:44:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.337 [2024-10-05 08:44:33.695986] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:57.337 [2024-10-05 08:44:33.696039] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:57.337 [2024-10-05 08:44:33.698528] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:57.337 [2024-10-05 08:44:33.698652] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:57.337 [2024-10-05 08:44:33.698691] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:57.337 [2024-10-05 08:44:33.698704] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:57.337 { 00:07:57.337 "results": [ 00:07:57.337 { 00:07:57.337 "job": "raid_bdev1", 00:07:57.337 "core_mask": "0x1", 00:07:57.337 "workload": "randrw", 00:07:57.337 "percentage": 50, 00:07:57.337 "status": "finished", 00:07:57.337 "queue_depth": 1, 00:07:57.337 "io_size": 131072, 00:07:57.337 "runtime": 1.356666, 00:07:57.337 "iops": 15022.120403990371, 00:07:57.337 "mibps": 1877.7650504987964, 00:07:57.337 "io_failed": 1, 00:07:57.337 "io_timeout": 0, 00:07:57.337 "avg_latency_us": 93.45644768470677, 00:07:57.337 "min_latency_us": 24.482096069868994, 00:07:57.337 "max_latency_us": 1330.7528384279476 00:07:57.337 } 00:07:57.337 ], 00:07:57.337 "core_count": 1 00:07:57.337 } 00:07:57.337 08:44:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.337 08:44:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62222 00:07:57.337 08:44:33 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 62222 ']' 00:07:57.337 08:44:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 62222 00:07:57.337 08:44:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:07:57.337 08:44:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:57.337 08:44:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62222 00:07:57.337 killing process with pid 62222 00:07:57.337 08:44:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:57.337 08:44:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:57.337 08:44:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62222' 00:07:57.337 08:44:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 62222 00:07:57.337 [2024-10-05 08:44:33.746537] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:57.337 08:44:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 62222 00:07:57.597 [2024-10-05 08:44:33.895158] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:58.980 08:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.SI6aRew1SF 00:07:58.980 08:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:58.980 08:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:58.980 08:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:58.980 08:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:58.980 08:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:58.980 08:44:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:58.980 08:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:07:58.980 00:07:58.980 real 0m4.606s 00:07:58.980 user 0m5.315s 00:07:58.980 sys 0m0.671s 00:07:58.980 ************************************ 00:07:58.980 END TEST raid_write_error_test 00:07:58.980 ************************************ 00:07:58.980 08:44:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:58.980 08:44:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.980 08:44:35 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:58.980 08:44:35 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:07:58.980 08:44:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:58.980 08:44:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.980 08:44:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:58.980 ************************************ 00:07:58.980 START TEST raid_state_function_test 00:07:58.980 ************************************ 00:07:58.980 08:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false 00:07:58.980 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:58.980 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:58.980 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:58.980 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:58.980 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:58.980 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:07:58.980 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:58.980 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:58.980 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:58.980 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:58.980 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:58.980 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:58.980 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:58.980 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:58.980 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:58.980 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:58.980 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:58.980 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:58.980 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:58.980 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:58.980 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:58.980 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:58.980 Process raid pid: 62341 00:07:58.980 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62341 00:07:58.980 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:58.980 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62341' 00:07:58.980 08:44:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62341 00:07:58.980 08:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 62341 ']' 00:07:58.980 08:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.980 08:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:58.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.980 08:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.980 08:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:58.980 08:44:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.241 [2024-10-05 08:44:35.465046] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:07:59.241 [2024-10-05 08:44:35.465172] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:59.241 [2024-10-05 08:44:35.635308] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.501 [2024-10-05 08:44:35.884872] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.760 [2024-10-05 08:44:36.120674] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:59.760 [2024-10-05 08:44:36.120708] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.018 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:00.018 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:00.018 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:00.018 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.018 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.018 [2024-10-05 08:44:36.297857] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:00.018 [2024-10-05 08:44:36.297927] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:00.018 [2024-10-05 08:44:36.297938] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:00.018 [2024-10-05 08:44:36.297949] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:00.018 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.018 08:44:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:00.018 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.018 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:00.018 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:00.018 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:00.018 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:00.018 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.018 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.018 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.018 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.018 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.018 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.018 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.018 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.018 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.018 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.018 "name": "Existed_Raid", 00:08:00.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.018 "strip_size_kb": 0, 00:08:00.018 "state": "configuring", 00:08:00.018 
"raid_level": "raid1", 00:08:00.018 "superblock": false, 00:08:00.018 "num_base_bdevs": 2, 00:08:00.018 "num_base_bdevs_discovered": 0, 00:08:00.018 "num_base_bdevs_operational": 2, 00:08:00.018 "base_bdevs_list": [ 00:08:00.018 { 00:08:00.018 "name": "BaseBdev1", 00:08:00.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.018 "is_configured": false, 00:08:00.018 "data_offset": 0, 00:08:00.018 "data_size": 0 00:08:00.018 }, 00:08:00.018 { 00:08:00.018 "name": "BaseBdev2", 00:08:00.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.018 "is_configured": false, 00:08:00.018 "data_offset": 0, 00:08:00.018 "data_size": 0 00:08:00.018 } 00:08:00.018 ] 00:08:00.018 }' 00:08:00.018 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.018 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.587 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:00.587 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.587 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.587 [2024-10-05 08:44:36.756983] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:00.587 [2024-10-05 08:44:36.757105] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:00.587 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.587 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:00.587 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.587 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:00.587 [2024-10-05 08:44:36.768966] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:00.587 [2024-10-05 08:44:36.769048] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:00.587 [2024-10-05 08:44:36.769074] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:00.587 [2024-10-05 08:44:36.769099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:00.587 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.587 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:00.587 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.587 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.587 [2024-10-05 08:44:36.852919] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:00.587 BaseBdev1 00:08:00.587 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.587 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:00.587 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:00.587 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:00.587 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:00.587 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:00.587 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:00.587 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:08:00.587 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.587 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.587 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.587 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:00.587 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.587 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.587 [ 00:08:00.587 { 00:08:00.587 "name": "BaseBdev1", 00:08:00.587 "aliases": [ 00:08:00.587 "6f3fe261-5592-42ab-b2e8-d612fa679fb9" 00:08:00.587 ], 00:08:00.587 "product_name": "Malloc disk", 00:08:00.587 "block_size": 512, 00:08:00.587 "num_blocks": 65536, 00:08:00.587 "uuid": "6f3fe261-5592-42ab-b2e8-d612fa679fb9", 00:08:00.587 "assigned_rate_limits": { 00:08:00.587 "rw_ios_per_sec": 0, 00:08:00.587 "rw_mbytes_per_sec": 0, 00:08:00.587 "r_mbytes_per_sec": 0, 00:08:00.587 "w_mbytes_per_sec": 0 00:08:00.587 }, 00:08:00.587 "claimed": true, 00:08:00.588 "claim_type": "exclusive_write", 00:08:00.588 "zoned": false, 00:08:00.588 "supported_io_types": { 00:08:00.588 "read": true, 00:08:00.588 "write": true, 00:08:00.588 "unmap": true, 00:08:00.588 "flush": true, 00:08:00.588 "reset": true, 00:08:00.588 "nvme_admin": false, 00:08:00.588 "nvme_io": false, 00:08:00.588 "nvme_io_md": false, 00:08:00.588 "write_zeroes": true, 00:08:00.588 "zcopy": true, 00:08:00.588 "get_zone_info": false, 00:08:00.588 "zone_management": false, 00:08:00.588 "zone_append": false, 00:08:00.588 "compare": false, 00:08:00.588 "compare_and_write": false, 00:08:00.588 "abort": true, 00:08:00.588 "seek_hole": false, 00:08:00.588 "seek_data": false, 00:08:00.588 "copy": true, 00:08:00.588 "nvme_iov_md": 
false 00:08:00.588 }, 00:08:00.588 "memory_domains": [ 00:08:00.588 { 00:08:00.588 "dma_device_id": "system", 00:08:00.588 "dma_device_type": 1 00:08:00.588 }, 00:08:00.588 { 00:08:00.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.588 "dma_device_type": 2 00:08:00.588 } 00:08:00.588 ], 00:08:00.588 "driver_specific": {} 00:08:00.588 } 00:08:00.588 ] 00:08:00.588 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.588 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:00.588 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:00.588 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.588 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:00.588 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:00.588 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:00.588 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:00.588 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.588 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.588 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.588 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.588 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.588 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.588 
08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.588 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.588 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.588 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.588 "name": "Existed_Raid", 00:08:00.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.588 "strip_size_kb": 0, 00:08:00.588 "state": "configuring", 00:08:00.588 "raid_level": "raid1", 00:08:00.588 "superblock": false, 00:08:00.588 "num_base_bdevs": 2, 00:08:00.588 "num_base_bdevs_discovered": 1, 00:08:00.588 "num_base_bdevs_operational": 2, 00:08:00.588 "base_bdevs_list": [ 00:08:00.588 { 00:08:00.588 "name": "BaseBdev1", 00:08:00.588 "uuid": "6f3fe261-5592-42ab-b2e8-d612fa679fb9", 00:08:00.588 "is_configured": true, 00:08:00.588 "data_offset": 0, 00:08:00.588 "data_size": 65536 00:08:00.588 }, 00:08:00.588 { 00:08:00.588 "name": "BaseBdev2", 00:08:00.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.588 "is_configured": false, 00:08:00.588 "data_offset": 0, 00:08:00.588 "data_size": 0 00:08:00.588 } 00:08:00.588 ] 00:08:00.588 }' 00:08:00.588 08:44:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.588 08:44:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.876 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:00.876 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.876 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.876 [2024-10-05 08:44:37.316114] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:00.876 [2024-10-05 08:44:37.316163] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:00.876 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.876 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:00.876 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.876 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.876 [2024-10-05 08:44:37.328129] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:00.876 [2024-10-05 08:44:37.330191] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:00.876 [2024-10-05 08:44:37.330236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:00.876 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.876 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:00.876 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:00.876 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:00.876 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.876 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:00.876 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:00.876 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:00.876 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:08:00.876 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.876 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.876 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.876 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.876 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.876 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.876 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.876 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.135 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.135 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.135 "name": "Existed_Raid", 00:08:01.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.135 "strip_size_kb": 0, 00:08:01.135 "state": "configuring", 00:08:01.135 "raid_level": "raid1", 00:08:01.135 "superblock": false, 00:08:01.135 "num_base_bdevs": 2, 00:08:01.136 "num_base_bdevs_discovered": 1, 00:08:01.136 "num_base_bdevs_operational": 2, 00:08:01.136 "base_bdevs_list": [ 00:08:01.136 { 00:08:01.136 "name": "BaseBdev1", 00:08:01.136 "uuid": "6f3fe261-5592-42ab-b2e8-d612fa679fb9", 00:08:01.136 "is_configured": true, 00:08:01.136 "data_offset": 0, 00:08:01.136 "data_size": 65536 00:08:01.136 }, 00:08:01.136 { 00:08:01.136 "name": "BaseBdev2", 00:08:01.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.136 "is_configured": false, 00:08:01.136 "data_offset": 0, 00:08:01.136 "data_size": 0 00:08:01.136 } 00:08:01.136 ] 
00:08:01.136 }' 00:08:01.136 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.136 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.396 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:01.396 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.396 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.396 [2024-10-05 08:44:37.799829] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:01.396 [2024-10-05 08:44:37.799992] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:01.396 [2024-10-05 08:44:37.800023] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:01.396 [2024-10-05 08:44:37.800380] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:01.396 [2024-10-05 08:44:37.800614] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:01.396 [2024-10-05 08:44:37.800661] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:01.396 [2024-10-05 08:44:37.801011] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:01.396 BaseBdev2 00:08:01.396 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.396 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:01.396 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:01.396 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:01.396 08:44:37 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@901 -- # local i 00:08:01.396 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:01.396 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:01.396 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:01.396 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.396 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.396 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.396 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:01.396 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.396 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.396 [ 00:08:01.396 { 00:08:01.396 "name": "BaseBdev2", 00:08:01.396 "aliases": [ 00:08:01.396 "33075599-8fc6-4d2e-81af-1db11b679fb8" 00:08:01.396 ], 00:08:01.396 "product_name": "Malloc disk", 00:08:01.396 "block_size": 512, 00:08:01.396 "num_blocks": 65536, 00:08:01.396 "uuid": "33075599-8fc6-4d2e-81af-1db11b679fb8", 00:08:01.396 "assigned_rate_limits": { 00:08:01.396 "rw_ios_per_sec": 0, 00:08:01.396 "rw_mbytes_per_sec": 0, 00:08:01.396 "r_mbytes_per_sec": 0, 00:08:01.396 "w_mbytes_per_sec": 0 00:08:01.396 }, 00:08:01.396 "claimed": true, 00:08:01.396 "claim_type": "exclusive_write", 00:08:01.396 "zoned": false, 00:08:01.396 "supported_io_types": { 00:08:01.396 "read": true, 00:08:01.396 "write": true, 00:08:01.396 "unmap": true, 00:08:01.396 "flush": true, 00:08:01.396 "reset": true, 00:08:01.396 "nvme_admin": false, 00:08:01.396 "nvme_io": false, 00:08:01.396 "nvme_io_md": false, 00:08:01.396 "write_zeroes": 
true, 00:08:01.396 "zcopy": true, 00:08:01.396 "get_zone_info": false, 00:08:01.396 "zone_management": false, 00:08:01.396 "zone_append": false, 00:08:01.396 "compare": false, 00:08:01.396 "compare_and_write": false, 00:08:01.396 "abort": true, 00:08:01.396 "seek_hole": false, 00:08:01.396 "seek_data": false, 00:08:01.396 "copy": true, 00:08:01.396 "nvme_iov_md": false 00:08:01.396 }, 00:08:01.396 "memory_domains": [ 00:08:01.396 { 00:08:01.396 "dma_device_id": "system", 00:08:01.396 "dma_device_type": 1 00:08:01.396 }, 00:08:01.396 { 00:08:01.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.396 "dma_device_type": 2 00:08:01.396 } 00:08:01.396 ], 00:08:01.396 "driver_specific": {} 00:08:01.396 } 00:08:01.396 ] 00:08:01.396 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.396 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:01.396 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:01.396 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:01.396 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:01.396 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.396 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:01.396 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:01.396 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:01.396 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:01.396 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.396 08:44:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.397 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.397 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.397 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.397 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.397 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.397 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.397 08:44:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.657 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.657 "name": "Existed_Raid", 00:08:01.657 "uuid": "5bab31d9-a0b7-404f-bda1-42af1c487fe6", 00:08:01.657 "strip_size_kb": 0, 00:08:01.657 "state": "online", 00:08:01.657 "raid_level": "raid1", 00:08:01.657 "superblock": false, 00:08:01.657 "num_base_bdevs": 2, 00:08:01.657 "num_base_bdevs_discovered": 2, 00:08:01.657 "num_base_bdevs_operational": 2, 00:08:01.657 "base_bdevs_list": [ 00:08:01.657 { 00:08:01.657 "name": "BaseBdev1", 00:08:01.657 "uuid": "6f3fe261-5592-42ab-b2e8-d612fa679fb9", 00:08:01.657 "is_configured": true, 00:08:01.657 "data_offset": 0, 00:08:01.657 "data_size": 65536 00:08:01.657 }, 00:08:01.657 { 00:08:01.657 "name": "BaseBdev2", 00:08:01.657 "uuid": "33075599-8fc6-4d2e-81af-1db11b679fb8", 00:08:01.657 "is_configured": true, 00:08:01.657 "data_offset": 0, 00:08:01.657 "data_size": 65536 00:08:01.657 } 00:08:01.657 ] 00:08:01.657 }' 00:08:01.657 08:44:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.657 08:44:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.918 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:01.918 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:01.918 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:01.918 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:01.918 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:01.918 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:01.918 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:01.918 08:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.918 08:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.918 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:01.918 [2024-10-05 08:44:38.287323] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:01.918 08:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.918 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:01.918 "name": "Existed_Raid", 00:08:01.918 "aliases": [ 00:08:01.918 "5bab31d9-a0b7-404f-bda1-42af1c487fe6" 00:08:01.918 ], 00:08:01.918 "product_name": "Raid Volume", 00:08:01.918 "block_size": 512, 00:08:01.918 "num_blocks": 65536, 00:08:01.918 "uuid": "5bab31d9-a0b7-404f-bda1-42af1c487fe6", 00:08:01.918 "assigned_rate_limits": { 00:08:01.918 "rw_ios_per_sec": 0, 00:08:01.918 "rw_mbytes_per_sec": 0, 00:08:01.918 "r_mbytes_per_sec": 0, 00:08:01.918 
"w_mbytes_per_sec": 0 00:08:01.918 }, 00:08:01.918 "claimed": false, 00:08:01.918 "zoned": false, 00:08:01.918 "supported_io_types": { 00:08:01.918 "read": true, 00:08:01.918 "write": true, 00:08:01.918 "unmap": false, 00:08:01.918 "flush": false, 00:08:01.918 "reset": true, 00:08:01.918 "nvme_admin": false, 00:08:01.918 "nvme_io": false, 00:08:01.918 "nvme_io_md": false, 00:08:01.918 "write_zeroes": true, 00:08:01.918 "zcopy": false, 00:08:01.918 "get_zone_info": false, 00:08:01.918 "zone_management": false, 00:08:01.918 "zone_append": false, 00:08:01.918 "compare": false, 00:08:01.918 "compare_and_write": false, 00:08:01.918 "abort": false, 00:08:01.918 "seek_hole": false, 00:08:01.918 "seek_data": false, 00:08:01.918 "copy": false, 00:08:01.918 "nvme_iov_md": false 00:08:01.918 }, 00:08:01.918 "memory_domains": [ 00:08:01.918 { 00:08:01.918 "dma_device_id": "system", 00:08:01.918 "dma_device_type": 1 00:08:01.918 }, 00:08:01.918 { 00:08:01.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.918 "dma_device_type": 2 00:08:01.918 }, 00:08:01.918 { 00:08:01.918 "dma_device_id": "system", 00:08:01.918 "dma_device_type": 1 00:08:01.918 }, 00:08:01.918 { 00:08:01.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.918 "dma_device_type": 2 00:08:01.918 } 00:08:01.918 ], 00:08:01.918 "driver_specific": { 00:08:01.918 "raid": { 00:08:01.918 "uuid": "5bab31d9-a0b7-404f-bda1-42af1c487fe6", 00:08:01.918 "strip_size_kb": 0, 00:08:01.918 "state": "online", 00:08:01.918 "raid_level": "raid1", 00:08:01.918 "superblock": false, 00:08:01.918 "num_base_bdevs": 2, 00:08:01.918 "num_base_bdevs_discovered": 2, 00:08:01.918 "num_base_bdevs_operational": 2, 00:08:01.918 "base_bdevs_list": [ 00:08:01.918 { 00:08:01.918 "name": "BaseBdev1", 00:08:01.918 "uuid": "6f3fe261-5592-42ab-b2e8-d612fa679fb9", 00:08:01.918 "is_configured": true, 00:08:01.918 "data_offset": 0, 00:08:01.918 "data_size": 65536 00:08:01.918 }, 00:08:01.918 { 00:08:01.918 "name": "BaseBdev2", 00:08:01.918 "uuid": 
"33075599-8fc6-4d2e-81af-1db11b679fb8", 00:08:01.918 "is_configured": true, 00:08:01.918 "data_offset": 0, 00:08:01.918 "data_size": 65536 00:08:01.918 } 00:08:01.918 ] 00:08:01.918 } 00:08:01.918 } 00:08:01.918 }' 00:08:01.918 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:01.918 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:01.918 BaseBdev2' 00:08:01.918 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:01.918 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:01.918 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:01.918 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:01.918 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:01.918 08:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.918 08:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.178 08:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.178 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:02.178 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:02.178 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:02.178 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:02.178 08:44:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.178 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.178 08:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.178 08:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.178 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:02.178 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:02.178 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:02.178 08:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.178 08:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.178 [2024-10-05 08:44:38.482727] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:02.178 08:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.178 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:02.178 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:02.178 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:02.178 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:02.178 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:02.178 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:02.178 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:08:02.178 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:02.178 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:02.178 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:02.178 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:02.178 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.178 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.178 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.178 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.178 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.178 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.178 08:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.178 08:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.178 08:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.178 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.178 "name": "Existed_Raid", 00:08:02.178 "uuid": "5bab31d9-a0b7-404f-bda1-42af1c487fe6", 00:08:02.178 "strip_size_kb": 0, 00:08:02.178 "state": "online", 00:08:02.178 "raid_level": "raid1", 00:08:02.178 "superblock": false, 00:08:02.178 "num_base_bdevs": 2, 00:08:02.178 "num_base_bdevs_discovered": 1, 00:08:02.178 "num_base_bdevs_operational": 1, 00:08:02.178 "base_bdevs_list": [ 00:08:02.178 { 
00:08:02.178 "name": null, 00:08:02.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.178 "is_configured": false, 00:08:02.178 "data_offset": 0, 00:08:02.178 "data_size": 65536 00:08:02.178 }, 00:08:02.178 { 00:08:02.178 "name": "BaseBdev2", 00:08:02.178 "uuid": "33075599-8fc6-4d2e-81af-1db11b679fb8", 00:08:02.178 "is_configured": true, 00:08:02.178 "data_offset": 0, 00:08:02.178 "data_size": 65536 00:08:02.178 } 00:08:02.178 ] 00:08:02.178 }' 00:08:02.178 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.178 08:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.749 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:02.749 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:02.749 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:02.749 08:44:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.749 08:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.749 08:44:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.749 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.749 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:02.749 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:02.749 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:02.749 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.749 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:02.749 [2024-10-05 08:44:39.024128] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:02.749 [2024-10-05 08:44:39.024291] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:02.749 [2024-10-05 08:44:39.127851] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:02.749 [2024-10-05 08:44:39.127994] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:02.749 [2024-10-05 08:44:39.128014] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:02.749 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.749 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:02.749 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:02.749 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.749 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.749 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:02.749 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.749 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.749 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:02.749 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:02.749 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:02.749 08:44:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62341 00:08:02.749 08:44:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 62341 ']' 00:08:02.749 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 62341 00:08:02.749 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:02.749 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:02.749 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62341 00:08:02.749 killing process with pid 62341 00:08:02.749 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:02.749 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:02.749 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62341' 00:08:02.749 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 62341 00:08:02.749 [2024-10-05 08:44:39.214037] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:02.749 08:44:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 62341 00:08:03.009 [2024-10-05 08:44:39.232252] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:04.390 ************************************ 00:08:04.390 END TEST raid_state_function_test 00:08:04.390 ************************************ 00:08:04.390 08:44:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:04.390 00:08:04.390 real 0m5.222s 00:08:04.390 user 0m7.208s 00:08:04.390 sys 0m0.949s 00:08:04.390 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:04.390 08:44:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.390 08:44:40 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:04.390 08:44:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:04.390 08:44:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:04.390 08:44:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:04.390 ************************************ 00:08:04.390 START TEST raid_state_function_test_sb 00:08:04.390 ************************************ 00:08:04.390 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:08:04.390 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:04.390 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:04.390 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:04.390 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:04.390 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:04.390 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:04.390 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:04.390 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:04.390 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:04.390 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:04.390 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:04.390 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:04.390 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:04.390 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:04.390 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:04.390 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:04.390 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:04.390 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:04.390 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:04.390 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:04.390 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:04.390 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:04.391 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62564 00:08:04.391 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:04.391 Process raid pid: 62564 00:08:04.391 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62564' 00:08:04.391 08:44:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62564 00:08:04.391 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 62564 ']' 00:08:04.391 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.391 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:04.391 08:44:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.391 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:04.391 08:44:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.391 [2024-10-05 08:44:40.752002] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:08:04.391 [2024-10-05 08:44:40.752109] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.651 [2024-10-05 08:44:40.918170] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.910 [2024-10-05 08:44:41.172106] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.171 [2024-10-05 08:44:41.412295] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.171 [2024-10-05 08:44:41.412328] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.171 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:05.171 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:05.171 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:05.171 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.171 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.171 [2024-10-05 08:44:41.592011] 
bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:05.171 [2024-10-05 08:44:41.592069] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:05.171 [2024-10-05 08:44:41.592079] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:05.171 [2024-10-05 08:44:41.592091] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:05.171 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.171 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:05.171 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.171 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:05.171 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:05.171 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:05.171 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:05.171 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.171 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.171 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.171 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.171 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.171 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:08:05.171 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.171 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.171 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.431 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.431 "name": "Existed_Raid", 00:08:05.431 "uuid": "c07756f1-03ed-40fa-8e1c-ce6e6d0cdbea", 00:08:05.431 "strip_size_kb": 0, 00:08:05.431 "state": "configuring", 00:08:05.431 "raid_level": "raid1", 00:08:05.431 "superblock": true, 00:08:05.431 "num_base_bdevs": 2, 00:08:05.431 "num_base_bdevs_discovered": 0, 00:08:05.431 "num_base_bdevs_operational": 2, 00:08:05.431 "base_bdevs_list": [ 00:08:05.431 { 00:08:05.431 "name": "BaseBdev1", 00:08:05.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.431 "is_configured": false, 00:08:05.431 "data_offset": 0, 00:08:05.431 "data_size": 0 00:08:05.431 }, 00:08:05.431 { 00:08:05.431 "name": "BaseBdev2", 00:08:05.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.431 "is_configured": false, 00:08:05.431 "data_offset": 0, 00:08:05.431 "data_size": 0 00:08:05.431 } 00:08:05.431 ] 00:08:05.431 }' 00:08:05.431 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.431 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.692 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:05.692 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.692 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.692 [2024-10-05 08:44:41.983212] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:08:05.692 [2024-10-05 08:44:41.983311] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:05.692 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.692 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:05.692 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.692 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.692 [2024-10-05 08:44:41.995228] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:05.692 [2024-10-05 08:44:41.995307] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:05.692 [2024-10-05 08:44:41.995333] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:05.692 [2024-10-05 08:44:41.995360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:05.692 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.692 08:44:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:05.692 08:44:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.692 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.692 [2024-10-05 08:44:42.057704] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:05.692 BaseBdev1 00:08:05.692 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.692 08:44:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:05.692 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:05.692 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:05.692 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:05.692 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:05.692 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:05.692 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:05.692 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.692 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.692 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.692 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:05.692 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.692 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.692 [ 00:08:05.692 { 00:08:05.692 "name": "BaseBdev1", 00:08:05.692 "aliases": [ 00:08:05.692 "25bff57a-c356-4f88-a9a8-3ae31d8d3ff5" 00:08:05.692 ], 00:08:05.692 "product_name": "Malloc disk", 00:08:05.692 "block_size": 512, 00:08:05.692 "num_blocks": 65536, 00:08:05.692 "uuid": "25bff57a-c356-4f88-a9a8-3ae31d8d3ff5", 00:08:05.692 "assigned_rate_limits": { 00:08:05.692 "rw_ios_per_sec": 0, 00:08:05.692 "rw_mbytes_per_sec": 0, 00:08:05.692 "r_mbytes_per_sec": 0, 00:08:05.692 "w_mbytes_per_sec": 0 00:08:05.692 }, 00:08:05.692 "claimed": true, 
00:08:05.692 "claim_type": "exclusive_write", 00:08:05.692 "zoned": false, 00:08:05.692 "supported_io_types": { 00:08:05.692 "read": true, 00:08:05.692 "write": true, 00:08:05.692 "unmap": true, 00:08:05.692 "flush": true, 00:08:05.692 "reset": true, 00:08:05.692 "nvme_admin": false, 00:08:05.692 "nvme_io": false, 00:08:05.692 "nvme_io_md": false, 00:08:05.692 "write_zeroes": true, 00:08:05.692 "zcopy": true, 00:08:05.692 "get_zone_info": false, 00:08:05.692 "zone_management": false, 00:08:05.692 "zone_append": false, 00:08:05.692 "compare": false, 00:08:05.692 "compare_and_write": false, 00:08:05.692 "abort": true, 00:08:05.692 "seek_hole": false, 00:08:05.692 "seek_data": false, 00:08:05.692 "copy": true, 00:08:05.692 "nvme_iov_md": false 00:08:05.692 }, 00:08:05.692 "memory_domains": [ 00:08:05.692 { 00:08:05.692 "dma_device_id": "system", 00:08:05.692 "dma_device_type": 1 00:08:05.692 }, 00:08:05.692 { 00:08:05.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.692 "dma_device_type": 2 00:08:05.692 } 00:08:05.692 ], 00:08:05.692 "driver_specific": {} 00:08:05.692 } 00:08:05.692 ] 00:08:05.692 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.692 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:05.692 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:05.692 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.692 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:05.692 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:05.692 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:05.692 08:44:42 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:05.692 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.692 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.692 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.692 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.692 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.692 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.692 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.692 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.692 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.692 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.692 "name": "Existed_Raid", 00:08:05.692 "uuid": "b88e8c55-f236-41a2-a0cb-99b1789b0112", 00:08:05.692 "strip_size_kb": 0, 00:08:05.692 "state": "configuring", 00:08:05.692 "raid_level": "raid1", 00:08:05.692 "superblock": true, 00:08:05.692 "num_base_bdevs": 2, 00:08:05.692 "num_base_bdevs_discovered": 1, 00:08:05.692 "num_base_bdevs_operational": 2, 00:08:05.692 "base_bdevs_list": [ 00:08:05.692 { 00:08:05.692 "name": "BaseBdev1", 00:08:05.692 "uuid": "25bff57a-c356-4f88-a9a8-3ae31d8d3ff5", 00:08:05.692 "is_configured": true, 00:08:05.692 "data_offset": 2048, 00:08:05.692 "data_size": 63488 00:08:05.692 }, 00:08:05.692 { 00:08:05.692 "name": "BaseBdev2", 00:08:05.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.692 "is_configured": false, 00:08:05.692 
"data_offset": 0, 00:08:05.692 "data_size": 0 00:08:05.692 } 00:08:05.692 ] 00:08:05.692 }' 00:08:05.692 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.692 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.263 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:06.263 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.263 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.263 [2024-10-05 08:44:42.532921] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:06.263 [2024-10-05 08:44:42.532976] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:06.263 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.263 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:06.263 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.263 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.263 [2024-10-05 08:44:42.544961] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:06.263 [2024-10-05 08:44:42.547022] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:06.263 [2024-10-05 08:44:42.547097] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:06.263 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.263 08:44:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:06.263 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:06.263 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:06.263 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.263 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:06.263 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:06.263 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:06.263 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:06.263 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.263 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.263 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.263 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.263 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.263 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.263 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.263 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.263 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.263 08:44:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.263 "name": "Existed_Raid", 00:08:06.263 "uuid": "840f9e2f-91b6-4a88-a852-91bcae4499d7", 00:08:06.263 "strip_size_kb": 0, 00:08:06.263 "state": "configuring", 00:08:06.263 "raid_level": "raid1", 00:08:06.263 "superblock": true, 00:08:06.263 "num_base_bdevs": 2, 00:08:06.263 "num_base_bdevs_discovered": 1, 00:08:06.263 "num_base_bdevs_operational": 2, 00:08:06.263 "base_bdevs_list": [ 00:08:06.263 { 00:08:06.263 "name": "BaseBdev1", 00:08:06.263 "uuid": "25bff57a-c356-4f88-a9a8-3ae31d8d3ff5", 00:08:06.263 "is_configured": true, 00:08:06.263 "data_offset": 2048, 00:08:06.263 "data_size": 63488 00:08:06.263 }, 00:08:06.263 { 00:08:06.263 "name": "BaseBdev2", 00:08:06.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.263 "is_configured": false, 00:08:06.263 "data_offset": 0, 00:08:06.263 "data_size": 0 00:08:06.263 } 00:08:06.263 ] 00:08:06.263 }' 00:08:06.263 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.263 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.523 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:06.523 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.523 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.523 [2024-10-05 08:44:42.992000] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:06.523 [2024-10-05 08:44:42.992266] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:06.523 [2024-10-05 08:44:42.992288] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:06.523 [2024-10-05 08:44:42.992593] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:06.523 
[2024-10-05 08:44:42.992758] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:06.523 [2024-10-05 08:44:42.992772] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:06.523 BaseBdev2 00:08:06.523 [2024-10-05 08:44:42.992933] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:06.784 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.784 08:44:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:06.784 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:06.784 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:06.784 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:06.784 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:06.784 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:06.784 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:06.784 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.784 08:44:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.784 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.784 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:06.784 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.784 08:44:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:06.784 [ 00:08:06.784 { 00:08:06.784 "name": "BaseBdev2", 00:08:06.784 "aliases": [ 00:08:06.784 "fa8da70d-4240-44f2-adee-07ff0bf3937f" 00:08:06.784 ], 00:08:06.784 "product_name": "Malloc disk", 00:08:06.784 "block_size": 512, 00:08:06.784 "num_blocks": 65536, 00:08:06.784 "uuid": "fa8da70d-4240-44f2-adee-07ff0bf3937f", 00:08:06.784 "assigned_rate_limits": { 00:08:06.784 "rw_ios_per_sec": 0, 00:08:06.784 "rw_mbytes_per_sec": 0, 00:08:06.784 "r_mbytes_per_sec": 0, 00:08:06.784 "w_mbytes_per_sec": 0 00:08:06.784 }, 00:08:06.784 "claimed": true, 00:08:06.784 "claim_type": "exclusive_write", 00:08:06.784 "zoned": false, 00:08:06.784 "supported_io_types": { 00:08:06.784 "read": true, 00:08:06.784 "write": true, 00:08:06.784 "unmap": true, 00:08:06.784 "flush": true, 00:08:06.784 "reset": true, 00:08:06.784 "nvme_admin": false, 00:08:06.784 "nvme_io": false, 00:08:06.784 "nvme_io_md": false, 00:08:06.784 "write_zeroes": true, 00:08:06.784 "zcopy": true, 00:08:06.784 "get_zone_info": false, 00:08:06.784 "zone_management": false, 00:08:06.784 "zone_append": false, 00:08:06.784 "compare": false, 00:08:06.784 "compare_and_write": false, 00:08:06.784 "abort": true, 00:08:06.784 "seek_hole": false, 00:08:06.784 "seek_data": false, 00:08:06.784 "copy": true, 00:08:06.784 "nvme_iov_md": false 00:08:06.784 }, 00:08:06.784 "memory_domains": [ 00:08:06.784 { 00:08:06.784 "dma_device_id": "system", 00:08:06.784 "dma_device_type": 1 00:08:06.784 }, 00:08:06.784 { 00:08:06.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.784 "dma_device_type": 2 00:08:06.784 } 00:08:06.784 ], 00:08:06.784 "driver_specific": {} 00:08:06.784 } 00:08:06.784 ] 00:08:06.784 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.784 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:06.784 08:44:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:06.784 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:06.784 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:06.784 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.784 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:06.784 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:06.784 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:06.784 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:06.784 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.784 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.784 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.784 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.784 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.784 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.784 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.784 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.784 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.784 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:06.784 "name": "Existed_Raid", 00:08:06.784 "uuid": "840f9e2f-91b6-4a88-a852-91bcae4499d7", 00:08:06.784 "strip_size_kb": 0, 00:08:06.784 "state": "online", 00:08:06.784 "raid_level": "raid1", 00:08:06.784 "superblock": true, 00:08:06.784 "num_base_bdevs": 2, 00:08:06.784 "num_base_bdevs_discovered": 2, 00:08:06.784 "num_base_bdevs_operational": 2, 00:08:06.784 "base_bdevs_list": [ 00:08:06.784 { 00:08:06.785 "name": "BaseBdev1", 00:08:06.785 "uuid": "25bff57a-c356-4f88-a9a8-3ae31d8d3ff5", 00:08:06.785 "is_configured": true, 00:08:06.785 "data_offset": 2048, 00:08:06.785 "data_size": 63488 00:08:06.785 }, 00:08:06.785 { 00:08:06.785 "name": "BaseBdev2", 00:08:06.785 "uuid": "fa8da70d-4240-44f2-adee-07ff0bf3937f", 00:08:06.785 "is_configured": true, 00:08:06.785 "data_offset": 2048, 00:08:06.785 "data_size": 63488 00:08:06.785 } 00:08:06.785 ] 00:08:06.785 }' 00:08:06.785 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.785 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.044 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:07.044 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:07.044 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:07.044 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:07.044 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:07.044 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:07.044 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:07.044 08:44:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.044 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.044 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:07.044 [2024-10-05 08:44:43.511372] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:07.305 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.305 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:07.305 "name": "Existed_Raid", 00:08:07.305 "aliases": [ 00:08:07.305 "840f9e2f-91b6-4a88-a852-91bcae4499d7" 00:08:07.305 ], 00:08:07.305 "product_name": "Raid Volume", 00:08:07.305 "block_size": 512, 00:08:07.305 "num_blocks": 63488, 00:08:07.305 "uuid": "840f9e2f-91b6-4a88-a852-91bcae4499d7", 00:08:07.305 "assigned_rate_limits": { 00:08:07.305 "rw_ios_per_sec": 0, 00:08:07.305 "rw_mbytes_per_sec": 0, 00:08:07.305 "r_mbytes_per_sec": 0, 00:08:07.305 "w_mbytes_per_sec": 0 00:08:07.305 }, 00:08:07.305 "claimed": false, 00:08:07.305 "zoned": false, 00:08:07.305 "supported_io_types": { 00:08:07.305 "read": true, 00:08:07.305 "write": true, 00:08:07.305 "unmap": false, 00:08:07.305 "flush": false, 00:08:07.305 "reset": true, 00:08:07.305 "nvme_admin": false, 00:08:07.305 "nvme_io": false, 00:08:07.305 "nvme_io_md": false, 00:08:07.305 "write_zeroes": true, 00:08:07.305 "zcopy": false, 00:08:07.305 "get_zone_info": false, 00:08:07.305 "zone_management": false, 00:08:07.305 "zone_append": false, 00:08:07.305 "compare": false, 00:08:07.305 "compare_and_write": false, 00:08:07.305 "abort": false, 00:08:07.305 "seek_hole": false, 00:08:07.305 "seek_data": false, 00:08:07.305 "copy": false, 00:08:07.305 "nvme_iov_md": false 00:08:07.305 }, 00:08:07.305 "memory_domains": [ 00:08:07.305 { 00:08:07.305 "dma_device_id": "system", 00:08:07.305 
"dma_device_type": 1 00:08:07.305 }, 00:08:07.305 { 00:08:07.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.305 "dma_device_type": 2 00:08:07.305 }, 00:08:07.305 { 00:08:07.305 "dma_device_id": "system", 00:08:07.305 "dma_device_type": 1 00:08:07.305 }, 00:08:07.305 { 00:08:07.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.305 "dma_device_type": 2 00:08:07.305 } 00:08:07.305 ], 00:08:07.305 "driver_specific": { 00:08:07.305 "raid": { 00:08:07.305 "uuid": "840f9e2f-91b6-4a88-a852-91bcae4499d7", 00:08:07.305 "strip_size_kb": 0, 00:08:07.305 "state": "online", 00:08:07.305 "raid_level": "raid1", 00:08:07.305 "superblock": true, 00:08:07.305 "num_base_bdevs": 2, 00:08:07.305 "num_base_bdevs_discovered": 2, 00:08:07.305 "num_base_bdevs_operational": 2, 00:08:07.305 "base_bdevs_list": [ 00:08:07.305 { 00:08:07.305 "name": "BaseBdev1", 00:08:07.305 "uuid": "25bff57a-c356-4f88-a9a8-3ae31d8d3ff5", 00:08:07.305 "is_configured": true, 00:08:07.305 "data_offset": 2048, 00:08:07.305 "data_size": 63488 00:08:07.305 }, 00:08:07.305 { 00:08:07.305 "name": "BaseBdev2", 00:08:07.305 "uuid": "fa8da70d-4240-44f2-adee-07ff0bf3937f", 00:08:07.305 "is_configured": true, 00:08:07.305 "data_offset": 2048, 00:08:07.305 "data_size": 63488 00:08:07.305 } 00:08:07.305 ] 00:08:07.305 } 00:08:07.305 } 00:08:07.305 }' 00:08:07.305 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:07.305 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:07.305 BaseBdev2' 00:08:07.305 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.305 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:07.305 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:08:07.305 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.305 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:07.305 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.305 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.305 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.305 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:07.305 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:07.305 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:07.305 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:07.305 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.305 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.305 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.305 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.305 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:07.305 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:07.305 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:07.305 08:44:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.305 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.305 [2024-10-05 08:44:43.722818] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:07.565 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.565 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:07.565 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:07.565 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:07.565 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:07.565 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:07.565 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:07.565 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.565 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:07.565 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:07.565 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:07.565 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:07.565 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.565 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.565 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:07.565 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.565 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.565 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.565 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.565 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.565 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.565 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.565 "name": "Existed_Raid", 00:08:07.565 "uuid": "840f9e2f-91b6-4a88-a852-91bcae4499d7", 00:08:07.565 "strip_size_kb": 0, 00:08:07.565 "state": "online", 00:08:07.565 "raid_level": "raid1", 00:08:07.565 "superblock": true, 00:08:07.565 "num_base_bdevs": 2, 00:08:07.565 "num_base_bdevs_discovered": 1, 00:08:07.565 "num_base_bdevs_operational": 1, 00:08:07.566 "base_bdevs_list": [ 00:08:07.566 { 00:08:07.566 "name": null, 00:08:07.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.566 "is_configured": false, 00:08:07.566 "data_offset": 0, 00:08:07.566 "data_size": 63488 00:08:07.566 }, 00:08:07.566 { 00:08:07.566 "name": "BaseBdev2", 00:08:07.566 "uuid": "fa8da70d-4240-44f2-adee-07ff0bf3937f", 00:08:07.566 "is_configured": true, 00:08:07.566 "data_offset": 2048, 00:08:07.566 "data_size": 63488 00:08:07.566 } 00:08:07.566 ] 00:08:07.566 }' 00:08:07.566 08:44:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.566 08:44:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.826 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:08:07.826 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:07.826 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.826 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:07.826 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.826 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.826 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.086 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:08.086 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:08.086 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:08.086 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.086 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.086 [2024-10-05 08:44:44.309214] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:08.086 [2024-10-05 08:44:44.309381] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:08.086 [2024-10-05 08:44:44.412473] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:08.086 [2024-10-05 08:44:44.412527] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:08.086 [2024-10-05 08:44:44.412540] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:08.086 08:44:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.086 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:08.086 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:08.086 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.086 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:08.086 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.086 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.086 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.086 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:08.086 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:08.086 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:08.086 08:44:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62564 00:08:08.086 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 62564 ']' 00:08:08.086 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 62564 00:08:08.086 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:08.086 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:08.086 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62564 00:08:08.086 killing process with pid 62564 00:08:08.086 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:08:08.086 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:08.086 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62564' 00:08:08.086 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 62564 00:08:08.086 [2024-10-05 08:44:44.510923] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:08.086 08:44:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 62564 00:08:08.086 [2024-10-05 08:44:44.528534] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:09.491 08:44:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:09.491 00:08:09.491 real 0m5.233s 00:08:09.491 user 0m7.263s 00:08:09.491 sys 0m0.931s 00:08:09.491 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:09.491 08:44:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.491 ************************************ 00:08:09.491 END TEST raid_state_function_test_sb 00:08:09.491 ************************************ 00:08:09.491 08:44:45 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:09.491 08:44:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:09.491 08:44:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:09.491 08:44:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:09.491 ************************************ 00:08:09.491 START TEST raid_superblock_test 00:08:09.491 ************************************ 00:08:09.491 08:44:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:08:09.491 08:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 
00:08:09.491 08:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:09.491 08:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:09.491 08:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:09.491 08:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:09.491 08:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:09.491 08:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:09.492 08:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:09.492 08:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:09.492 08:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:09.492 08:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:09.492 08:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:09.492 08:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:09.492 08:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:09.492 08:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:09.492 08:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62786 00:08:09.492 08:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:09.492 08:44:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62786 00:08:09.492 08:44:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 62786 ']' 00:08:09.492 08:44:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.752 08:44:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:09.752 08:44:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.752 08:44:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:09.752 08:44:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.752 [2024-10-05 08:44:46.040780] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:08:09.752 [2024-10-05 08:44:46.041010] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62786 ] 00:08:09.752 [2024-10-05 08:44:46.205807] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.012 [2024-10-05 08:44:46.464123] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.272 [2024-10-05 08:44:46.697380] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:10.272 [2024-10-05 08:44:46.697519] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:10.532 08:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:10.532 08:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:10.532 08:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:10.532 08:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:10.532 08:44:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:10.532 08:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:10.532 08:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:10.532 08:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:10.532 08:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:10.532 08:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:10.532 08:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:10.532 08:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.532 08:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.532 malloc1 00:08:10.532 08:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.532 08:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:10.532 08:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.532 08:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.532 [2024-10-05 08:44:46.917543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:10.533 [2024-10-05 08:44:46.917651] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:10.533 [2024-10-05 08:44:46.917695] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:10.533 [2024-10-05 08:44:46.917727] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:10.533 
[2024-10-05 08:44:46.920074] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:10.533 [2024-10-05 08:44:46.920139] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:10.533 pt1 00:08:10.533 08:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.533 08:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:10.533 08:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:10.533 08:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:10.533 08:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:10.533 08:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:10.533 08:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:10.533 08:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:10.533 08:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:10.533 08:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:10.533 08:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.533 08:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.533 malloc2 00:08:10.533 08:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.533 08:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:10.533 08:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.533 08:44:46 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.533 [2024-10-05 08:44:46.993181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:10.533 [2024-10-05 08:44:46.993270] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:10.533 [2024-10-05 08:44:46.993296] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:10.533 [2024-10-05 08:44:46.993305] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:10.533 [2024-10-05 08:44:46.995618] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:10.533 [2024-10-05 08:44:46.995652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:10.533 pt2 00:08:10.533 08:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.533 08:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:10.533 08:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:10.533 08:44:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:10.533 08:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.533 08:44:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.793 [2024-10-05 08:44:47.005238] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:10.793 [2024-10-05 08:44:47.007297] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:10.793 [2024-10-05 08:44:47.007455] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:10.793 [2024-10-05 08:44:47.007468] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:10.793 [2024-10-05 
08:44:47.007695] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:10.793 [2024-10-05 08:44:47.007855] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:10.793 [2024-10-05 08:44:47.007868] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:10.793 [2024-10-05 08:44:47.008011] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:10.793 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.793 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:10.793 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:10.793 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:10.793 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:10.793 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:10.793 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:10.793 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.793 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.793 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.793 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.793 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.793 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:10.793 08:44:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.793 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.793 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.793 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.793 "name": "raid_bdev1", 00:08:10.793 "uuid": "f1d535ce-d259-4b91-8c97-ac657a765f38", 00:08:10.793 "strip_size_kb": 0, 00:08:10.793 "state": "online", 00:08:10.793 "raid_level": "raid1", 00:08:10.793 "superblock": true, 00:08:10.793 "num_base_bdevs": 2, 00:08:10.793 "num_base_bdevs_discovered": 2, 00:08:10.793 "num_base_bdevs_operational": 2, 00:08:10.793 "base_bdevs_list": [ 00:08:10.793 { 00:08:10.793 "name": "pt1", 00:08:10.793 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:10.793 "is_configured": true, 00:08:10.793 "data_offset": 2048, 00:08:10.793 "data_size": 63488 00:08:10.793 }, 00:08:10.793 { 00:08:10.793 "name": "pt2", 00:08:10.793 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:10.793 "is_configured": true, 00:08:10.793 "data_offset": 2048, 00:08:10.793 "data_size": 63488 00:08:10.793 } 00:08:10.793 ] 00:08:10.793 }' 00:08:10.793 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.793 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.052 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:11.052 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:11.052 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:11.052 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:11.052 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:11.052 
08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:11.052 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:11.052 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:11.052 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.052 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.052 [2024-10-05 08:44:47.440813] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:11.052 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.053 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:11.053 "name": "raid_bdev1", 00:08:11.053 "aliases": [ 00:08:11.053 "f1d535ce-d259-4b91-8c97-ac657a765f38" 00:08:11.053 ], 00:08:11.053 "product_name": "Raid Volume", 00:08:11.053 "block_size": 512, 00:08:11.053 "num_blocks": 63488, 00:08:11.053 "uuid": "f1d535ce-d259-4b91-8c97-ac657a765f38", 00:08:11.053 "assigned_rate_limits": { 00:08:11.053 "rw_ios_per_sec": 0, 00:08:11.053 "rw_mbytes_per_sec": 0, 00:08:11.053 "r_mbytes_per_sec": 0, 00:08:11.053 "w_mbytes_per_sec": 0 00:08:11.053 }, 00:08:11.053 "claimed": false, 00:08:11.053 "zoned": false, 00:08:11.053 "supported_io_types": { 00:08:11.053 "read": true, 00:08:11.053 "write": true, 00:08:11.053 "unmap": false, 00:08:11.053 "flush": false, 00:08:11.053 "reset": true, 00:08:11.053 "nvme_admin": false, 00:08:11.053 "nvme_io": false, 00:08:11.053 "nvme_io_md": false, 00:08:11.053 "write_zeroes": true, 00:08:11.053 "zcopy": false, 00:08:11.053 "get_zone_info": false, 00:08:11.053 "zone_management": false, 00:08:11.053 "zone_append": false, 00:08:11.053 "compare": false, 00:08:11.053 "compare_and_write": false, 00:08:11.053 "abort": false, 00:08:11.053 "seek_hole": false, 
00:08:11.053 "seek_data": false, 00:08:11.053 "copy": false, 00:08:11.053 "nvme_iov_md": false 00:08:11.053 }, 00:08:11.053 "memory_domains": [ 00:08:11.053 { 00:08:11.053 "dma_device_id": "system", 00:08:11.053 "dma_device_type": 1 00:08:11.053 }, 00:08:11.053 { 00:08:11.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.053 "dma_device_type": 2 00:08:11.053 }, 00:08:11.053 { 00:08:11.053 "dma_device_id": "system", 00:08:11.053 "dma_device_type": 1 00:08:11.053 }, 00:08:11.053 { 00:08:11.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.053 "dma_device_type": 2 00:08:11.053 } 00:08:11.053 ], 00:08:11.053 "driver_specific": { 00:08:11.053 "raid": { 00:08:11.053 "uuid": "f1d535ce-d259-4b91-8c97-ac657a765f38", 00:08:11.053 "strip_size_kb": 0, 00:08:11.053 "state": "online", 00:08:11.053 "raid_level": "raid1", 00:08:11.053 "superblock": true, 00:08:11.053 "num_base_bdevs": 2, 00:08:11.053 "num_base_bdevs_discovered": 2, 00:08:11.053 "num_base_bdevs_operational": 2, 00:08:11.053 "base_bdevs_list": [ 00:08:11.053 { 00:08:11.053 "name": "pt1", 00:08:11.053 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:11.053 "is_configured": true, 00:08:11.053 "data_offset": 2048, 00:08:11.053 "data_size": 63488 00:08:11.053 }, 00:08:11.053 { 00:08:11.053 "name": "pt2", 00:08:11.053 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:11.053 "is_configured": true, 00:08:11.053 "data_offset": 2048, 00:08:11.053 "data_size": 63488 00:08:11.053 } 00:08:11.053 ] 00:08:11.053 } 00:08:11.053 } 00:08:11.053 }' 00:08:11.053 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:11.313 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:11.313 pt2' 00:08:11.313 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.313 08:44:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:11.313 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.313 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:11.313 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.313 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.313 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.313 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.313 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:11.313 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.313 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.313 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:11.313 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.313 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.313 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.313 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.313 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:11.313 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.313 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:08:11.313 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:11.313 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.313 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.313 [2024-10-05 08:44:47.680287] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:11.313 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.313 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f1d535ce-d259-4b91-8c97-ac657a765f38 00:08:11.313 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f1d535ce-d259-4b91-8c97-ac657a765f38 ']' 00:08:11.313 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:11.313 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.313 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.313 [2024-10-05 08:44:47.723972] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:11.313 [2024-10-05 08:44:47.723996] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:11.313 [2024-10-05 08:44:47.724085] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:11.313 [2024-10-05 08:44:47.724151] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:11.313 [2024-10-05 08:44:47.724164] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:11.313 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.313 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:08:11.313 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.313 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.313 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:11.313 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.313 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:11.313 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:11.313 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:11.313 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:11.313 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.313 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.574 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.574 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:11.574 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:11.574 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.574 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.574 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.574 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:11.574 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:11.574 08:44:47 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.574 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.574 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.574 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:11.574 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:11.574 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:11.574 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:11.574 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:11.574 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:11.574 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:11.574 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:11.574 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:11.574 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.574 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.574 [2024-10-05 08:44:47.867725] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:11.574 [2024-10-05 08:44:47.869808] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:11.574 [2024-10-05 08:44:47.869917] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:08:11.574 [2024-10-05 08:44:47.870017] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:11.574 [2024-10-05 08:44:47.870068] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:11.574 [2024-10-05 08:44:47.870113] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:11.574 request: 00:08:11.574 { 00:08:11.574 "name": "raid_bdev1", 00:08:11.574 "raid_level": "raid1", 00:08:11.574 "base_bdevs": [ 00:08:11.574 "malloc1", 00:08:11.574 "malloc2" 00:08:11.574 ], 00:08:11.574 "superblock": false, 00:08:11.574 "method": "bdev_raid_create", 00:08:11.574 "req_id": 1 00:08:11.574 } 00:08:11.574 Got JSON-RPC error response 00:08:11.574 response: 00:08:11.574 { 00:08:11.574 "code": -17, 00:08:11.574 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:11.574 } 00:08:11.574 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:11.574 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:11.574 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:11.575 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:11.575 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:11.575 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.575 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.575 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.575 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:11.575 08:44:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.575 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:11.575 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:11.575 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:11.575 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.575 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.575 [2024-10-05 08:44:47.935581] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:11.575 [2024-10-05 08:44:47.935663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:11.575 [2024-10-05 08:44:47.935682] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:11.575 [2024-10-05 08:44:47.935701] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:11.575 [2024-10-05 08:44:47.938108] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:11.575 [2024-10-05 08:44:47.938143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:11.575 [2024-10-05 08:44:47.938204] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:11.575 [2024-10-05 08:44:47.938258] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:11.575 pt1 00:08:11.575 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.575 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:11.575 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:11.575 08:44:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.575 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:11.575 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:11.575 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:11.575 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.575 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.575 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.575 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.575 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.575 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:11.575 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.575 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.575 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.575 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.575 "name": "raid_bdev1", 00:08:11.575 "uuid": "f1d535ce-d259-4b91-8c97-ac657a765f38", 00:08:11.575 "strip_size_kb": 0, 00:08:11.575 "state": "configuring", 00:08:11.575 "raid_level": "raid1", 00:08:11.575 "superblock": true, 00:08:11.575 "num_base_bdevs": 2, 00:08:11.575 "num_base_bdevs_discovered": 1, 00:08:11.575 "num_base_bdevs_operational": 2, 00:08:11.575 "base_bdevs_list": [ 00:08:11.575 { 00:08:11.575 "name": "pt1", 00:08:11.575 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:11.575 
"is_configured": true, 00:08:11.575 "data_offset": 2048, 00:08:11.575 "data_size": 63488 00:08:11.575 }, 00:08:11.575 { 00:08:11.575 "name": null, 00:08:11.575 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:11.575 "is_configured": false, 00:08:11.575 "data_offset": 2048, 00:08:11.575 "data_size": 63488 00:08:11.575 } 00:08:11.575 ] 00:08:11.575 }' 00:08:11.575 08:44:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.575 08:44:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.143 08:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:12.143 08:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:12.143 08:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:12.143 08:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:12.143 08:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.143 08:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.143 [2024-10-05 08:44:48.378820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:12.143 [2024-10-05 08:44:48.378926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:12.143 [2024-10-05 08:44:48.378951] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:12.143 [2024-10-05 08:44:48.378973] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:12.143 [2024-10-05 08:44:48.379428] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:12.143 [2024-10-05 08:44:48.379448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:12.143 [2024-10-05 08:44:48.379522] 
bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:12.143 [2024-10-05 08:44:48.379545] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:12.143 [2024-10-05 08:44:48.379668] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:12.143 [2024-10-05 08:44:48.379681] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:12.143 [2024-10-05 08:44:48.379934] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:12.143 [2024-10-05 08:44:48.380100] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:12.143 [2024-10-05 08:44:48.380116] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:12.143 [2024-10-05 08:44:48.380261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:12.143 pt2 00:08:12.143 08:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.143 08:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:12.143 08:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:12.143 08:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:12.143 08:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:12.143 08:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:12.143 08:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:12.143 08:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:12.143 08:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:12.143 
08:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.143 08:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.143 08:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.143 08:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.143 08:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.143 08:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.144 08:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.144 08:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:12.144 08:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.144 08:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.144 "name": "raid_bdev1", 00:08:12.144 "uuid": "f1d535ce-d259-4b91-8c97-ac657a765f38", 00:08:12.144 "strip_size_kb": 0, 00:08:12.144 "state": "online", 00:08:12.144 "raid_level": "raid1", 00:08:12.144 "superblock": true, 00:08:12.144 "num_base_bdevs": 2, 00:08:12.144 "num_base_bdevs_discovered": 2, 00:08:12.144 "num_base_bdevs_operational": 2, 00:08:12.144 "base_bdevs_list": [ 00:08:12.144 { 00:08:12.144 "name": "pt1", 00:08:12.144 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:12.144 "is_configured": true, 00:08:12.144 "data_offset": 2048, 00:08:12.144 "data_size": 63488 00:08:12.144 }, 00:08:12.144 { 00:08:12.144 "name": "pt2", 00:08:12.144 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:12.144 "is_configured": true, 00:08:12.144 "data_offset": 2048, 00:08:12.144 "data_size": 63488 00:08:12.144 } 00:08:12.144 ] 00:08:12.144 }' 00:08:12.144 08:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:08:12.144 08:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.403 08:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:12.403 08:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:12.403 08:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:12.403 08:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:12.403 08:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:12.403 08:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:12.403 08:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:12.403 08:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.403 08:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.403 08:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:12.403 [2024-10-05 08:44:48.830293] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:12.403 08:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.663 08:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:12.663 "name": "raid_bdev1", 00:08:12.663 "aliases": [ 00:08:12.663 "f1d535ce-d259-4b91-8c97-ac657a765f38" 00:08:12.663 ], 00:08:12.663 "product_name": "Raid Volume", 00:08:12.663 "block_size": 512, 00:08:12.663 "num_blocks": 63488, 00:08:12.663 "uuid": "f1d535ce-d259-4b91-8c97-ac657a765f38", 00:08:12.663 "assigned_rate_limits": { 00:08:12.663 "rw_ios_per_sec": 0, 00:08:12.663 "rw_mbytes_per_sec": 0, 00:08:12.663 "r_mbytes_per_sec": 0, 00:08:12.663 "w_mbytes_per_sec": 0 
00:08:12.663 }, 00:08:12.663 "claimed": false, 00:08:12.663 "zoned": false, 00:08:12.663 "supported_io_types": { 00:08:12.663 "read": true, 00:08:12.663 "write": true, 00:08:12.663 "unmap": false, 00:08:12.663 "flush": false, 00:08:12.663 "reset": true, 00:08:12.663 "nvme_admin": false, 00:08:12.663 "nvme_io": false, 00:08:12.663 "nvme_io_md": false, 00:08:12.663 "write_zeroes": true, 00:08:12.663 "zcopy": false, 00:08:12.663 "get_zone_info": false, 00:08:12.663 "zone_management": false, 00:08:12.663 "zone_append": false, 00:08:12.663 "compare": false, 00:08:12.663 "compare_and_write": false, 00:08:12.663 "abort": false, 00:08:12.663 "seek_hole": false, 00:08:12.663 "seek_data": false, 00:08:12.663 "copy": false, 00:08:12.663 "nvme_iov_md": false 00:08:12.663 }, 00:08:12.663 "memory_domains": [ 00:08:12.663 { 00:08:12.663 "dma_device_id": "system", 00:08:12.663 "dma_device_type": 1 00:08:12.663 }, 00:08:12.663 { 00:08:12.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.663 "dma_device_type": 2 00:08:12.663 }, 00:08:12.663 { 00:08:12.663 "dma_device_id": "system", 00:08:12.663 "dma_device_type": 1 00:08:12.663 }, 00:08:12.663 { 00:08:12.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.663 "dma_device_type": 2 00:08:12.663 } 00:08:12.663 ], 00:08:12.663 "driver_specific": { 00:08:12.663 "raid": { 00:08:12.663 "uuid": "f1d535ce-d259-4b91-8c97-ac657a765f38", 00:08:12.663 "strip_size_kb": 0, 00:08:12.663 "state": "online", 00:08:12.663 "raid_level": "raid1", 00:08:12.663 "superblock": true, 00:08:12.663 "num_base_bdevs": 2, 00:08:12.663 "num_base_bdevs_discovered": 2, 00:08:12.663 "num_base_bdevs_operational": 2, 00:08:12.663 "base_bdevs_list": [ 00:08:12.663 { 00:08:12.663 "name": "pt1", 00:08:12.663 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:12.663 "is_configured": true, 00:08:12.663 "data_offset": 2048, 00:08:12.663 "data_size": 63488 00:08:12.663 }, 00:08:12.663 { 00:08:12.663 "name": "pt2", 00:08:12.663 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:08:12.663 "is_configured": true, 00:08:12.663 "data_offset": 2048, 00:08:12.663 "data_size": 63488 00:08:12.663 } 00:08:12.663 ] 00:08:12.663 } 00:08:12.663 } 00:08:12.663 }' 00:08:12.663 08:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:12.663 08:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:12.663 pt2' 00:08:12.663 08:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.663 08:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:12.663 08:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.663 08:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:12.663 08:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.663 08:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.663 08:44:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.663 08:44:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.663 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.663 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.663 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.663 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:12.663 08:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:12.663 08:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.663 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.663 08:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.663 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.663 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.663 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:12.663 08:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.663 08:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.663 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:12.663 [2024-10-05 08:44:49.061841] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:12.663 08:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.664 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f1d535ce-d259-4b91-8c97-ac657a765f38 '!=' f1d535ce-d259-4b91-8c97-ac657a765f38 ']' 00:08:12.664 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:12.664 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:12.664 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:12.664 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:12.664 08:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.664 08:44:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:12.664 [2024-10-05 08:44:49.089614] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:12.664 08:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.664 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:12.664 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:12.664 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:12.664 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:12.664 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:12.664 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:12.664 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.664 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.664 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.664 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.664 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.664 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:12.664 08:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.664 08:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.664 08:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.923 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:08:12.923 "name": "raid_bdev1", 00:08:12.923 "uuid": "f1d535ce-d259-4b91-8c97-ac657a765f38", 00:08:12.923 "strip_size_kb": 0, 00:08:12.923 "state": "online", 00:08:12.923 "raid_level": "raid1", 00:08:12.923 "superblock": true, 00:08:12.923 "num_base_bdevs": 2, 00:08:12.923 "num_base_bdevs_discovered": 1, 00:08:12.923 "num_base_bdevs_operational": 1, 00:08:12.924 "base_bdevs_list": [ 00:08:12.924 { 00:08:12.924 "name": null, 00:08:12.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.924 "is_configured": false, 00:08:12.924 "data_offset": 0, 00:08:12.924 "data_size": 63488 00:08:12.924 }, 00:08:12.924 { 00:08:12.924 "name": "pt2", 00:08:12.924 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:12.924 "is_configured": true, 00:08:12.924 "data_offset": 2048, 00:08:12.924 "data_size": 63488 00:08:12.924 } 00:08:12.924 ] 00:08:12.924 }' 00:08:12.924 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.924 08:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.184 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:13.184 08:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.184 08:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.184 [2024-10-05 08:44:49.492909] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:13.184 [2024-10-05 08:44:49.493001] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:13.184 [2024-10-05 08:44:49.493087] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:13.184 [2024-10-05 08:44:49.493151] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:13.184 [2024-10-05 08:44:49.493199] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:13.184 08:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.184 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.184 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:13.184 08:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.184 08:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.184 08:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.184 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:13.184 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:13.184 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:13.184 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:13.184 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:13.184 08:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.184 08:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.184 08:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.184 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:13.184 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:13.184 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:13.184 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:13.184 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:08:13.184 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:13.184 08:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.184 08:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.184 [2024-10-05 08:44:49.564780] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:13.184 [2024-10-05 08:44:49.564840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:13.184 [2024-10-05 08:44:49.564857] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:13.184 [2024-10-05 08:44:49.564868] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:13.184 [2024-10-05 08:44:49.567355] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:13.184 [2024-10-05 08:44:49.567424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:13.184 [2024-10-05 08:44:49.567522] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:13.184 [2024-10-05 08:44:49.567584] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:13.184 [2024-10-05 08:44:49.567709] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:13.184 [2024-10-05 08:44:49.567751] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:13.184 [2024-10-05 08:44:49.568028] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:13.184 [2024-10-05 08:44:49.568221] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:13.184 [2024-10-05 08:44:49.568261] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000008200 00:08:13.184 [2024-10-05 08:44:49.568432] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:13.184 pt2 00:08:13.184 08:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.184 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:13.184 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:13.184 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:13.184 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:13.184 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:13.184 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:13.184 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.184 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.184 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.184 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.184 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.185 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:13.185 08:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.185 08:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.185 08:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.185 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:08:13.185 "name": "raid_bdev1", 00:08:13.185 "uuid": "f1d535ce-d259-4b91-8c97-ac657a765f38", 00:08:13.185 "strip_size_kb": 0, 00:08:13.185 "state": "online", 00:08:13.185 "raid_level": "raid1", 00:08:13.185 "superblock": true, 00:08:13.185 "num_base_bdevs": 2, 00:08:13.185 "num_base_bdevs_discovered": 1, 00:08:13.185 "num_base_bdevs_operational": 1, 00:08:13.185 "base_bdevs_list": [ 00:08:13.185 { 00:08:13.185 "name": null, 00:08:13.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.185 "is_configured": false, 00:08:13.185 "data_offset": 2048, 00:08:13.185 "data_size": 63488 00:08:13.185 }, 00:08:13.185 { 00:08:13.185 "name": "pt2", 00:08:13.185 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:13.185 "is_configured": true, 00:08:13.185 "data_offset": 2048, 00:08:13.185 "data_size": 63488 00:08:13.185 } 00:08:13.185 ] 00:08:13.185 }' 00:08:13.185 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.185 08:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.755 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:13.755 08:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.755 08:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.755 [2024-10-05 08:44:49.984050] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:13.755 [2024-10-05 08:44:49.984078] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:13.755 [2024-10-05 08:44:49.984133] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:13.755 [2024-10-05 08:44:49.984175] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:13.755 [2024-10-05 08:44:49.984183] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:13.755 08:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.755 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:13.755 08:44:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.755 08:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.755 08:44:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.755 08:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.755 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:13.755 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:08:13.755 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:13.755 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:13.755 08:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.755 08:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.755 [2024-10-05 08:44:50.035975] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:13.755 [2024-10-05 08:44:50.036056] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:13.755 [2024-10-05 08:44:50.036087] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:13.755 [2024-10-05 08:44:50.036114] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:13.755 [2024-10-05 08:44:50.038518] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:13.755 [2024-10-05 08:44:50.038581] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:13.755 [2024-10-05 08:44:50.038670] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:13.755 [2024-10-05 08:44:50.038740] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:13.755 [2024-10-05 08:44:50.038889] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:13.755 [2024-10-05 08:44:50.038944] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:13.755 [2024-10-05 08:44:50.039005] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:08:13.755 [2024-10-05 08:44:50.039110] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:13.755 [2024-10-05 08:44:50.039224] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:08:13.755 [2024-10-05 08:44:50.039260] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:13.755 [2024-10-05 08:44:50.039501] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:13.755 [2024-10-05 08:44:50.039683] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:08:13.755 [2024-10-05 08:44:50.039725] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:08:13.755 [2024-10-05 08:44:50.039931] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:13.755 pt1 00:08:13.755 08:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.755 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:13.755 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:08:13.755 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:13.755 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:13.755 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:13.755 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:13.755 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:13.755 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.756 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.756 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.756 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.756 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.756 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:13.756 08:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.756 08:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.756 08:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.756 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.756 "name": "raid_bdev1", 00:08:13.756 "uuid": "f1d535ce-d259-4b91-8c97-ac657a765f38", 00:08:13.756 "strip_size_kb": 0, 00:08:13.756 "state": "online", 00:08:13.756 "raid_level": "raid1", 00:08:13.756 "superblock": true, 00:08:13.756 "num_base_bdevs": 2, 00:08:13.756 "num_base_bdevs_discovered": 1, 00:08:13.756 "num_base_bdevs_operational": 
1, 00:08:13.756 "base_bdevs_list": [ 00:08:13.756 { 00:08:13.756 "name": null, 00:08:13.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.756 "is_configured": false, 00:08:13.756 "data_offset": 2048, 00:08:13.756 "data_size": 63488 00:08:13.756 }, 00:08:13.756 { 00:08:13.756 "name": "pt2", 00:08:13.756 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:13.756 "is_configured": true, 00:08:13.756 "data_offset": 2048, 00:08:13.756 "data_size": 63488 00:08:13.756 } 00:08:13.756 ] 00:08:13.756 }' 00:08:13.756 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.756 08:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.326 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:14.326 08:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.326 08:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.326 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:14.326 08:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.326 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:14.326 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:14.326 08:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.326 08:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.326 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:14.326 [2024-10-05 08:44:50.579274] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:14.326 08:44:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.326 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' f1d535ce-d259-4b91-8c97-ac657a765f38 '!=' f1d535ce-d259-4b91-8c97-ac657a765f38 ']' 00:08:14.326 08:44:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62786 00:08:14.326 08:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 62786 ']' 00:08:14.326 08:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 62786 00:08:14.326 08:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:14.326 08:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:14.326 08:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62786 00:08:14.326 08:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:14.326 08:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:14.326 killing process with pid 62786 00:08:14.326 08:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62786' 00:08:14.326 08:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 62786 00:08:14.326 [2024-10-05 08:44:50.639277] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:14.326 [2024-10-05 08:44:50.639360] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:14.326 [2024-10-05 08:44:50.639406] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:14.326 [2024-10-05 08:44:50.639420] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:08:14.326 08:44:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 
62786 00:08:14.586 [2024-10-05 08:44:50.861671] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:15.967 08:44:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:15.967 00:08:15.967 real 0m6.255s 00:08:15.967 user 0m9.198s 00:08:15.967 sys 0m1.104s 00:08:15.967 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:15.967 08:44:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.967 ************************************ 00:08:15.967 END TEST raid_superblock_test 00:08:15.967 ************************************ 00:08:15.967 08:44:52 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:15.967 08:44:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:15.967 08:44:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:15.967 08:44:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:15.967 ************************************ 00:08:15.967 START TEST raid_read_error_test 00:08:15.967 ************************************ 00:08:15.967 08:44:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:08:15.967 08:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:15.967 08:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:15.967 08:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:15.967 08:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:15.967 08:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:15.967 08:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:15.967 08:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:08:15.967 08:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:15.967 08:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:15.967 08:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:15.967 08:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:15.967 08:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:15.967 08:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:15.967 08:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:15.967 08:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:15.967 08:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:15.967 08:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:15.967 08:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:15.967 08:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:15.967 08:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:15.967 08:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:15.967 08:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.GvGolU126p 00:08:15.967 08:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63076 00:08:15.967 08:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:15.967 08:44:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63076 00:08:15.967 
08:44:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 63076 ']' 00:08:15.967 08:44:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.967 08:44:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:15.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.967 08:44:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.967 08:44:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:15.967 08:44:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.967 [2024-10-05 08:44:52.389561] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:08:15.967 [2024-10-05 08:44:52.389678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63076 ] 00:08:16.230 [2024-10-05 08:44:52.558985] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.492 [2024-10-05 08:44:52.807616] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.751 [2024-10-05 08:44:53.042656] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:16.751 [2024-10-05 08:44:53.042695] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:16.751 08:44:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:16.751 08:44:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:16.751 08:44:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:08:16.751 08:44:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:16.751 08:44:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.751 08:44:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.011 BaseBdev1_malloc 00:08:17.011 08:44:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.011 08:44:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:17.011 08:44:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.011 08:44:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.011 true 00:08:17.011 08:44:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.011 08:44:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:17.011 08:44:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.011 08:44:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.011 [2024-10-05 08:44:53.263671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:17.011 [2024-10-05 08:44:53.263769] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:17.011 [2024-10-05 08:44:53.263804] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:17.011 [2024-10-05 08:44:53.263834] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:17.011 [2024-10-05 08:44:53.266195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:17.011 [2024-10-05 08:44:53.266266] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:08:17.011 BaseBdev1 00:08:17.011 08:44:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.011 08:44:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:17.011 08:44:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:17.011 08:44:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.011 08:44:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.011 BaseBdev2_malloc 00:08:17.011 08:44:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.011 08:44:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:17.011 08:44:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.011 08:44:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.011 true 00:08:17.011 08:44:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.011 08:44:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:17.011 08:44:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.011 08:44:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.011 [2024-10-05 08:44:53.357467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:17.011 [2024-10-05 08:44:53.357522] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:17.011 [2024-10-05 08:44:53.357540] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:17.011 [2024-10-05 08:44:53.357552] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:17.011 [2024-10-05 08:44:53.359868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:17.012 [2024-10-05 08:44:53.359908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:17.012 BaseBdev2 00:08:17.012 08:44:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.012 08:44:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:17.012 08:44:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.012 08:44:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.012 [2024-10-05 08:44:53.365532] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:17.012 [2024-10-05 08:44:53.367646] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:17.012 [2024-10-05 08:44:53.367843] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:17.012 [2024-10-05 08:44:53.367858] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:17.012 [2024-10-05 08:44:53.368107] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:17.012 [2024-10-05 08:44:53.368283] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:17.012 [2024-10-05 08:44:53.368295] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:17.012 [2024-10-05 08:44:53.368439] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:17.012 08:44:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.012 08:44:53 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:17.012 08:44:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:17.012 08:44:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:17.012 08:44:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:17.012 08:44:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:17.012 08:44:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:17.012 08:44:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.012 08:44:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.012 08:44:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.012 08:44:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.012 08:44:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.012 08:44:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.012 08:44:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.012 08:44:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:17.012 08:44:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.012 08:44:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.012 "name": "raid_bdev1", 00:08:17.012 "uuid": "14ceb7be-9c95-46c9-b871-cb2672122d37", 00:08:17.012 "strip_size_kb": 0, 00:08:17.012 "state": "online", 00:08:17.012 "raid_level": "raid1", 00:08:17.012 "superblock": true, 00:08:17.012 "num_base_bdevs": 2, 00:08:17.012 
"num_base_bdevs_discovered": 2, 00:08:17.012 "num_base_bdevs_operational": 2, 00:08:17.012 "base_bdevs_list": [ 00:08:17.012 { 00:08:17.012 "name": "BaseBdev1", 00:08:17.012 "uuid": "a899bbcf-9476-55b7-8be4-07261541f5d5", 00:08:17.012 "is_configured": true, 00:08:17.012 "data_offset": 2048, 00:08:17.012 "data_size": 63488 00:08:17.012 }, 00:08:17.012 { 00:08:17.012 "name": "BaseBdev2", 00:08:17.012 "uuid": "cf84b460-ae69-52d8-95c8-3287a10dfd48", 00:08:17.012 "is_configured": true, 00:08:17.012 "data_offset": 2048, 00:08:17.012 "data_size": 63488 00:08:17.012 } 00:08:17.012 ] 00:08:17.012 }' 00:08:17.012 08:44:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.012 08:44:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.581 08:44:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:17.581 08:44:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:17.581 [2024-10-05 08:44:53.906130] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:18.520 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:18.520 08:44:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.520 08:44:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.520 08:44:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.520 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:18.520 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:18.520 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:18.520 08:44:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:18.520 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:18.520 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:18.520 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:18.520 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:18.520 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:18.520 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:18.520 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.520 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.520 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.520 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.520 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.520 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:18.520 08:44:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.520 08:44:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.520 08:44:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.520 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.520 "name": "raid_bdev1", 00:08:18.520 "uuid": "14ceb7be-9c95-46c9-b871-cb2672122d37", 00:08:18.520 "strip_size_kb": 0, 00:08:18.520 "state": "online", 
00:08:18.520 "raid_level": "raid1", 00:08:18.520 "superblock": true, 00:08:18.520 "num_base_bdevs": 2, 00:08:18.520 "num_base_bdevs_discovered": 2, 00:08:18.520 "num_base_bdevs_operational": 2, 00:08:18.520 "base_bdevs_list": [ 00:08:18.520 { 00:08:18.520 "name": "BaseBdev1", 00:08:18.520 "uuid": "a899bbcf-9476-55b7-8be4-07261541f5d5", 00:08:18.520 "is_configured": true, 00:08:18.520 "data_offset": 2048, 00:08:18.520 "data_size": 63488 00:08:18.520 }, 00:08:18.520 { 00:08:18.520 "name": "BaseBdev2", 00:08:18.520 "uuid": "cf84b460-ae69-52d8-95c8-3287a10dfd48", 00:08:18.520 "is_configured": true, 00:08:18.520 "data_offset": 2048, 00:08:18.520 "data_size": 63488 00:08:18.520 } 00:08:18.520 ] 00:08:18.520 }' 00:08:18.520 08:44:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.520 08:44:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.089 08:44:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:19.089 08:44:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.089 08:44:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.089 [2024-10-05 08:44:55.287723] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:19.089 [2024-10-05 08:44:55.287773] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:19.089 [2024-10-05 08:44:55.290185] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:19.089 [2024-10-05 08:44:55.290232] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:19.089 [2024-10-05 08:44:55.290315] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:19.089 [2024-10-05 08:44:55.290327] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:08:19.089 { 00:08:19.089 "results": [ 00:08:19.089 { 00:08:19.089 "job": "raid_bdev1", 00:08:19.089 "core_mask": "0x1", 00:08:19.089 "workload": "randrw", 00:08:19.089 "percentage": 50, 00:08:19.089 "status": "finished", 00:08:19.089 "queue_depth": 1, 00:08:19.089 "io_size": 131072, 00:08:19.089 "runtime": 1.382165, 00:08:19.089 "iops": 15059.707053788803, 00:08:19.089 "mibps": 1882.4633817236004, 00:08:19.089 "io_failed": 0, 00:08:19.089 "io_timeout": 0, 00:08:19.089 "avg_latency_us": 63.93033173297306, 00:08:19.089 "min_latency_us": 21.687336244541484, 00:08:19.089 "max_latency_us": 1402.2986899563318 00:08:19.089 } 00:08:19.089 ], 00:08:19.089 "core_count": 1 00:08:19.089 } 00:08:19.089 08:44:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.089 08:44:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63076 00:08:19.089 08:44:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 63076 ']' 00:08:19.089 08:44:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 63076 00:08:19.089 08:44:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:19.089 08:44:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:19.089 08:44:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63076 00:08:19.089 killing process with pid 63076 00:08:19.089 08:44:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:19.089 08:44:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:19.089 08:44:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63076' 00:08:19.089 08:44:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 63076 00:08:19.089 [2024-10-05 
08:44:55.337240] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:19.089 08:44:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 63076 00:08:19.090 [2024-10-05 08:44:55.476136] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:20.471 08:44:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.GvGolU126p 00:08:20.471 08:44:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:20.471 08:44:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:20.471 08:44:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:20.471 08:44:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:20.471 ************************************ 00:08:20.471 END TEST raid_read_error_test 00:08:20.471 ************************************ 00:08:20.471 08:44:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:20.471 08:44:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:20.471 08:44:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:20.471 00:08:20.471 real 0m4.601s 00:08:20.471 user 0m5.306s 00:08:20.471 sys 0m0.686s 00:08:20.471 08:44:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.471 08:44:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.471 08:44:56 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:20.471 08:44:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:20.471 08:44:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.471 08:44:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:20.732 ************************************ 00:08:20.732 START TEST 
raid_write_error_test 00:08:20.732 ************************************ 00:08:20.732 08:44:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:08:20.732 08:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:20.732 08:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:20.732 08:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:20.732 08:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:20.732 08:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:20.732 08:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:20.732 08:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:20.732 08:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:20.732 08:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:20.732 08:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:20.732 08:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:20.732 08:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:20.732 08:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:20.732 08:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:20.732 08:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:20.732 08:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:20.732 08:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:20.732 08:44:56 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:20.732 08:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:20.732 08:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:20.732 08:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:20.732 08:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.p0gN7pU4zA 00:08:20.732 08:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63197 00:08:20.732 08:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:20.732 08:44:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63197 00:08:20.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.732 08:44:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 63197 ']' 00:08:20.732 08:44:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.732 08:44:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:20.732 08:44:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.732 08:44:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:20.732 08:44:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.732 [2024-10-05 08:44:57.059935] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:08:20.732 [2024-10-05 08:44:57.060066] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63197 ] 00:08:20.992 [2024-10-05 08:44:57.223733] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.252 [2024-10-05 08:44:57.471771] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.252 [2024-10-05 08:44:57.692777] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:21.252 [2024-10-05 08:44:57.692876] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:21.512 08:44:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:21.512 08:44:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:21.512 08:44:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:21.512 08:44:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:21.512 08:44:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.512 08:44:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.512 BaseBdev1_malloc 00:08:21.512 08:44:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.512 08:44:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:21.512 08:44:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.512 08:44:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.512 true 00:08:21.512 08:44:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:21.512 08:44:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:21.512 08:44:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.512 08:44:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.512 [2024-10-05 08:44:57.941097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:21.512 [2024-10-05 08:44:57.941223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:21.512 [2024-10-05 08:44:57.941244] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:21.512 [2024-10-05 08:44:57.941257] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:21.512 [2024-10-05 08:44:57.943594] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:21.512 [2024-10-05 08:44:57.943634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:21.512 BaseBdev1 00:08:21.512 08:44:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.512 08:44:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:21.512 08:44:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:21.512 08:44:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.512 08:44:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.772 BaseBdev2_malloc 00:08:21.772 08:44:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.772 08:44:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:21.772 08:44:58 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.772 08:44:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.772 true 00:08:21.772 08:44:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.772 08:44:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:21.772 08:44:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.772 08:44:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.772 [2024-10-05 08:44:58.040639] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:21.772 [2024-10-05 08:44:58.040694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:21.772 [2024-10-05 08:44:58.040710] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:21.772 [2024-10-05 08:44:58.040722] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:21.772 [2024-10-05 08:44:58.043099] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:21.772 [2024-10-05 08:44:58.043134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:21.772 BaseBdev2 00:08:21.772 08:44:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.772 08:44:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:21.772 08:44:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.772 08:44:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.772 [2024-10-05 08:44:58.052690] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:21.772 [2024-10-05 08:44:58.054718] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:21.772 [2024-10-05 08:44:58.054978] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:21.772 [2024-10-05 08:44:58.054997] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:21.772 [2024-10-05 08:44:58.055227] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:21.772 [2024-10-05 08:44:58.055403] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:21.772 [2024-10-05 08:44:58.055413] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:21.772 [2024-10-05 08:44:58.055561] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:21.772 08:44:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.772 08:44:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:21.772 08:44:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:21.773 08:44:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:21.773 08:44:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:21.773 08:44:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:21.773 08:44:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:21.773 08:44:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.773 08:44:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.773 08:44:58 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:21.773 08:44:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:21.773 08:44:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:21.773 08:44:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:21.773 08:44:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:21.773 08:44:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:21.773 08:44:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:21.773 08:44:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:21.773 "name": "raid_bdev1",
00:08:21.773 "uuid": "c46e84d1-2f8d-426c-b090-e1187095e63f",
00:08:21.773 "strip_size_kb": 0,
00:08:21.773 "state": "online",
00:08:21.773 "raid_level": "raid1",
00:08:21.773 "superblock": true,
00:08:21.773 "num_base_bdevs": 2,
00:08:21.773 "num_base_bdevs_discovered": 2,
00:08:21.773 "num_base_bdevs_operational": 2,
00:08:21.773 "base_bdevs_list": [
00:08:21.773 {
00:08:21.773 "name": "BaseBdev1",
00:08:21.773 "uuid": "94037e74-6d86-5630-92a9-39c2a7e540a0",
00:08:21.773 "is_configured": true,
00:08:21.773 "data_offset": 2048,
00:08:21.773 "data_size": 63488
00:08:21.773 },
00:08:21.773 {
00:08:21.773 "name": "BaseBdev2",
00:08:21.773 "uuid": "99da7a5e-46a3-50e6-bc0a-9df939f18798",
00:08:21.773 "is_configured": true,
00:08:21.773 "data_offset": 2048,
00:08:21.773 "data_size": 63488
00:08:21.773 }
00:08:21.773 ]
00:08:21.773 }'
00:08:21.773 08:44:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:21.773 08:44:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:22.030 08:44:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:08:22.030 08:44:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:08:22.289 [2024-10-05 08:44:58.557245] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:08:23.228 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:08:23.228 08:44:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:23.228 08:44:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:23.228 [2024-10-05 08:44:59.487185] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1'
00:08:23.228 [2024-10-05 08:44:59.487351] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:08:23.228 [2024-10-05 08:44:59.487576] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0
00:08:23.228 08:44:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:23.228 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:08:23.228 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]]
00:08:23.228 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]]
00:08:23.228 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1
00:08:23.228 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:08:23.228 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:23.228 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:23.228 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:23.228 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:23.228 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:08:23.228 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:23.228 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:23.228 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:23.228 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:23.228 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:23.228 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:23.228 08:44:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:23.228 08:44:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:23.228 08:44:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:23.228 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:23.228 "name": "raid_bdev1",
00:08:23.228 "uuid": "c46e84d1-2f8d-426c-b090-e1187095e63f",
00:08:23.228 "strip_size_kb": 0,
00:08:23.229 "state": "online",
00:08:23.229 "raid_level": "raid1",
00:08:23.229 "superblock": true,
00:08:23.229 "num_base_bdevs": 2,
00:08:23.229 "num_base_bdevs_discovered": 1,
00:08:23.229 "num_base_bdevs_operational": 1,
00:08:23.229 "base_bdevs_list": [
00:08:23.229 {
00:08:23.229 "name": null,
00:08:23.229 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:23.229 "is_configured": false,
00:08:23.229 "data_offset": 0,
00:08:23.229 "data_size": 63488
00:08:23.229 },
00:08:23.229 {
00:08:23.229 "name": "BaseBdev2",
00:08:23.229 "uuid": "99da7a5e-46a3-50e6-bc0a-9df939f18798",
00:08:23.229 "is_configured": true,
00:08:23.229 "data_offset": 2048,
00:08:23.229 "data_size": 63488
00:08:23.229 }
00:08:23.229 ]
00:08:23.229 }'
00:08:23.229 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:23.229 08:44:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:23.499 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:08:23.499 08:44:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:23.499 08:44:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:23.499 [2024-10-05 08:44:59.944463] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:23.499 [2024-10-05 08:44:59.944498] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:23.499 [2024-10-05 08:44:59.947028] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:23.499 [2024-10-05 08:44:59.947075] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:23.499 [2024-10-05 08:44:59.947136] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:23.499 [2024-10-05 08:44:59.947146] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:08:23.499 {
00:08:23.499 "results": [
00:08:23.499 {
00:08:23.499 "job": "raid_bdev1",
00:08:23.499 "core_mask": "0x1",
00:08:23.499 "workload": "randrw",
00:08:23.499 "percentage": 50,
00:08:23.499 "status": "finished",
00:08:23.499 "queue_depth": 1,
00:08:23.499 "io_size": 131072,
00:08:23.499 "runtime": 1.387842,
00:08:23.499 "iops": 18670.713236809377,
00:08:23.499 "mibps": 2333.839154601172,
00:08:23.499 "io_failed": 0,
00:08:23.499 "io_timeout": 0,
00:08:23.499 "avg_latency_us": 51.11780308494589,
00:08:23.499 "min_latency_us": 20.68122270742358,
00:08:23.499 "max_latency_us": 1309.2890829694322
00:08:23.499 }
00:08:23.499 ],
00:08:23.499 "core_count": 1
00:08:23.499 }
00:08:23.499 08:44:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:23.499 08:44:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63197
00:08:23.499 08:44:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 63197 ']'
00:08:23.499 08:44:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 63197
00:08:23.499 08:44:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname
00:08:23.499 08:44:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:23.499 08:44:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63197
00:08:23.778 killing process with pid 63197
08:44:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:08:23.778 08:44:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:08:23.778 08:44:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63197'
00:08:23.778 08:44:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 63197
00:08:23.778 [2024-10-05 08:44:59.990803] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:23.778 08:44:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 63197
00:08:23.778 [2024-10-05 08:45:00.133754] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:25.162 08:45:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.p0gN7pU4zA
00:08:25.162 08:45:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:08:25.162 08:45:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:08:25.163 08:45:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00
00:08:25.163 08:45:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1
00:08:25.163 08:45:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:25.163 08:45:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0
00:08:25.163 08:45:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]]
00:08:25.163
00:08:25.163 real 0m4.541s
00:08:25.163 user 0m5.242s
00:08:25.163 sys 0m0.639s
00:08:25.163 ************************************
00:08:25.163 END TEST raid_write_error_test
00:08:25.163 ************************************
00:08:25.163 08:45:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:25.163 08:45:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:25.163 08:45:01 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4}
00:08:25.163 08:45:01 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:08:25.163 08:45:01 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false
00:08:25.163 08:45:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:08:25.163 08:45:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:25.163 08:45:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:25.163 ************************************
00:08:25.163 START TEST raid_state_function_test
00:08:25.163 ************************************
00:08:25.163 08:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false
00:08:25.163 08:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0
00:08:25.163 08:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3
00:08:25.163 08:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:08:25.163 08:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:08:25.163 08:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:08:25.163 08:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:25.163 08:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:08:25.163 08:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:25.163 08:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:25.163 08:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:08:25.163 08:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:25.163 08:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:25.163 08:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:08:25.163 08:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:25.163 08:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:25.163 08:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:08:25.163 08:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:08:25.163 08:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:08:25.163 08:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:08:25.163 08:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:08:25.163 08:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:08:25.163 08:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']'
00:08:25.163 08:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:08:25.163 08:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:08:25.163 08:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:08:25.163 08:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:08:25.163 Process raid pid: 63312
00:08:25.163 08:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63312
00:08:25.163 08:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:25.163 08:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63312'
00:08:25.163 08:45:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63312
00:08:25.163 08:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 63312 ']'
00:08:25.163 08:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:25.163 08:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:25.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:25.163 08:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:25.163 08:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:25.163 08:45:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:25.423 [2024-10-05 08:45:01.668642] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization...
00:08:25.423 [2024-10-05 08:45:01.668877] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:25.423 [2024-10-05 08:45:01.831180] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:25.683 [2024-10-05 08:45:02.076642] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:08:25.943 [2024-10-05 08:45:02.304815] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:25.943 [2024-10-05 08:45:02.304986] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:26.204 08:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:26.204 08:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0
00:08:26.204 08:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:26.204 08:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:26.204 08:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:26.204 [2024-10-05 08:45:02.487016] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:26.204 [2024-10-05 08:45:02.487075] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:26.204 [2024-10-05 08:45:02.487085] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:26.204 [2024-10-05 08:45:02.487094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:26.204 [2024-10-05 08:45:02.487099] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:26.204 [2024-10-05 08:45:02.487108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:26.204 08:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:26.204 08:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:26.204 08:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:26.204 08:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:26.204 08:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:26.204 08:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:26.204 08:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:26.204 08:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:26.204 08:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:26.204 08:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:26.204 08:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:26.204 08:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:26.204 08:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:26.204 08:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:26.204 08:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:26.204 08:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:26.204 08:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:26.204 "name": "Existed_Raid",
00:08:26.204 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:26.204 "strip_size_kb": 64,
00:08:26.204 "state": "configuring",
00:08:26.204 "raid_level": "raid0",
00:08:26.204 "superblock": false,
00:08:26.204 "num_base_bdevs": 3,
00:08:26.204 "num_base_bdevs_discovered": 0,
00:08:26.204 "num_base_bdevs_operational": 3,
00:08:26.204 "base_bdevs_list": [
00:08:26.204 {
00:08:26.204 "name": "BaseBdev1",
00:08:26.204 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:26.204 "is_configured": false,
00:08:26.204 "data_offset": 0,
00:08:26.204 "data_size": 0
00:08:26.204 },
00:08:26.204 {
00:08:26.204 "name": "BaseBdev2",
00:08:26.204 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:26.204 "is_configured": false,
00:08:26.204 "data_offset": 0,
00:08:26.204 "data_size": 0
00:08:26.204 },
00:08:26.204 {
00:08:26.204 "name": "BaseBdev3",
00:08:26.204 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:26.204 "is_configured": false,
00:08:26.204 "data_offset": 0,
00:08:26.204 "data_size": 0
00:08:26.204 }
00:08:26.204 ]
00:08:26.204 }'
00:08:26.204 08:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:26.204 08:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:26.465 08:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:26.465 08:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:26.465 08:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:26.465 [2024-10-05 08:45:02.882235] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:26.465 [2024-10-05 08:45:02.882326] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:08:26.465 08:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:26.465 08:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:26.465 08:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:26.465 08:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:26.465 [2024-10-05 08:45:02.894235] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:26.465 [2024-10-05 08:45:02.894322] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:26.465 [2024-10-05 08:45:02.894350] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:26.465 [2024-10-05 08:45:02.894375] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:26.465 [2024-10-05 08:45:02.894394] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:26.465 [2024-10-05 08:45:02.894416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:26.465 08:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:26.465 08:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:26.465 08:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:26.465 08:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:26.726 [2024-10-05 08:45:02.964732] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:26.726 BaseBdev1
00:08:26.726 08:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:26.726 08:45:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:08:26.726 08:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:08:26.726 08:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:26.726 08:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:08:26.726 08:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:26.726 08:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:26.726 08:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:26.726 08:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:26.726 08:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:26.726 08:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:26.726 08:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:26.726 08:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:26.726 08:45:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:26.726 [
00:08:26.726 {
00:08:26.726 "name": "BaseBdev1",
00:08:26.726 "aliases": [
00:08:26.726 "0c49c479-4d16-49c4-b9fd-f50f174e8fac"
00:08:26.726 ],
00:08:26.726 "product_name": "Malloc disk",
00:08:26.726 "block_size": 512,
00:08:26.726 "num_blocks": 65536,
00:08:26.726 "uuid": "0c49c479-4d16-49c4-b9fd-f50f174e8fac",
00:08:26.726 "assigned_rate_limits": {
00:08:26.726 "rw_ios_per_sec": 0,
00:08:26.726 "rw_mbytes_per_sec": 0,
00:08:26.726 "r_mbytes_per_sec": 0,
00:08:26.726 "w_mbytes_per_sec": 0
00:08:26.726 },
00:08:26.726 "claimed": true,
00:08:26.726 "claim_type": "exclusive_write",
00:08:26.726 "zoned": false,
00:08:26.726 "supported_io_types": {
00:08:26.726 "read": true,
00:08:26.726 "write": true,
00:08:26.726 "unmap": true,
00:08:26.726 "flush": true,
00:08:26.726 "reset": true,
00:08:26.726 "nvme_admin": false,
00:08:26.726 "nvme_io": false,
00:08:26.726 "nvme_io_md": false,
00:08:26.726 "write_zeroes": true,
00:08:26.726 "zcopy": true,
00:08:26.726 "get_zone_info": false,
00:08:26.726 "zone_management": false,
00:08:26.726 "zone_append": false,
00:08:26.726 "compare": false,
00:08:26.726 "compare_and_write": false,
00:08:26.726 "abort": true,
00:08:26.726 "seek_hole": false,
00:08:26.726 "seek_data": false,
00:08:26.726 "copy": true,
00:08:26.726 "nvme_iov_md": false
00:08:26.726 },
00:08:26.726 "memory_domains": [
00:08:26.726 {
00:08:26.726 "dma_device_id": "system",
00:08:26.726 "dma_device_type": 1
00:08:26.726 },
00:08:26.726 {
00:08:26.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:26.726 "dma_device_type": 2
00:08:26.726 }
00:08:26.726 ],
00:08:26.726 "driver_specific": {}
00:08:26.726 }
00:08:26.726 ]
00:08:26.726 08:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:26.726 08:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:08:26.726 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:26.726 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:26.726 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:26.726 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:26.726 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:26.726 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:26.726 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:26.726 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:26.726 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:26.726 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:26.726 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:26.726 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:26.726 08:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:26.726 08:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:26.726 08:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:26.726 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:26.726 "name": "Existed_Raid",
00:08:26.726 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:26.726 "strip_size_kb": 64,
00:08:26.726 "state": "configuring",
00:08:26.726 "raid_level": "raid0",
00:08:26.726 "superblock": false,
00:08:26.726 "num_base_bdevs": 3,
00:08:26.726 "num_base_bdevs_discovered": 1,
00:08:26.726 "num_base_bdevs_operational": 3,
00:08:26.726 "base_bdevs_list": [
00:08:26.726 {
00:08:26.726 "name": "BaseBdev1",
00:08:26.726 "uuid": "0c49c479-4d16-49c4-b9fd-f50f174e8fac",
00:08:26.726 "is_configured": true,
00:08:26.726 "data_offset": 0,
00:08:26.726 "data_size": 65536
00:08:26.726 },
00:08:26.726 {
00:08:26.726 "name": "BaseBdev2",
00:08:26.726 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:26.726 "is_configured": false,
00:08:26.726 "data_offset": 0,
00:08:26.726 "data_size": 0
00:08:26.726 },
00:08:26.726 {
00:08:26.726 "name": "BaseBdev3",
00:08:26.726 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:26.726 "is_configured": false,
00:08:26.726 "data_offset": 0,
00:08:26.726 "data_size": 0
00:08:26.726 }
00:08:26.726 ]
00:08:26.726 }'
00:08:26.726 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:26.726 08:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:27.297 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:27.297 08:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:27.297 08:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:27.297 [2024-10-05 08:45:03.475914] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:27.298 [2024-10-05 08:45:03.476007] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:08:27.298 08:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:27.298 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:27.298 08:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:27.298 08:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:27.298 [2024-10-05 08:45:03.487910] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:27.298 [2024-10-05 08:45:03.490196] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:27.298 [2024-10-05 08:45:03.490322] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:27.298 [2024-10-05 08:45:03.490340] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:27.298 [2024-10-05 08:45:03.490352] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:27.298 08:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:27.298 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:08:27.298 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:27.298 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:27.298 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:27.298 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:27.298 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:27.298 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:27.298 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:27.298 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:27.298 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:27.298 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:27.298 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:27.298 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:27.298 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:27.298 08:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:27.298 08:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:27.298 08:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:27.298 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:27.298 "name": "Existed_Raid",
00:08:27.298 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:27.298 "strip_size_kb": 64,
00:08:27.298 "state": "configuring",
00:08:27.298 "raid_level": "raid0",
00:08:27.298 "superblock": false,
00:08:27.298 "num_base_bdevs": 3,
00:08:27.298 "num_base_bdevs_discovered": 1,
00:08:27.298 "num_base_bdevs_operational": 3,
00:08:27.298 "base_bdevs_list": [
00:08:27.298 {
00:08:27.298 "name": "BaseBdev1",
00:08:27.298 "uuid": "0c49c479-4d16-49c4-b9fd-f50f174e8fac",
00:08:27.298 "is_configured": true,
00:08:27.298 "data_offset": 0,
00:08:27.298 "data_size": 65536
00:08:27.298 },
00:08:27.298 {
00:08:27.298 "name": "BaseBdev2",
00:08:27.298 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:27.298 "is_configured": false,
00:08:27.298 "data_offset": 0,
00:08:27.298 "data_size": 0
00:08:27.298 },
00:08:27.298 {
00:08:27.298 "name": "BaseBdev3",
00:08:27.298 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:27.298 "is_configured": false,
00:08:27.298 "data_offset": 0,
00:08:27.298 "data_size": 0
00:08:27.298 }
00:08:27.298 ]
00:08:27.298 }'
00:08:27.298 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:27.298 08:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:27.558 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:08:27.558 08:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:27.558 08:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:27.558 [2024-10-05 08:45:03.931126] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:27.558 BaseBdev2
00:08:27.558 08:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:27.558 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:08:27.558 08:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:08:27.558 08:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:27.558 08:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:08:27.558 08:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:27.558 08:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:27.558 08:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:27.558 08:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:27.558 08:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:27.558 08:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:27.558 08:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:27.558 08:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:27.558 08:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:27.558 [
00:08:27.558 {
00:08:27.558 "name": "BaseBdev2",
00:08:27.558 "aliases": [
00:08:27.558 "d4b60a4d-b771-402c-8330-0a59f6ad6229"
00:08:27.558 ],
00:08:27.558 "product_name": "Malloc disk",
00:08:27.558 "block_size": 512,
00:08:27.558 "num_blocks": 65536,
00:08:27.558 "uuid": "d4b60a4d-b771-402c-8330-0a59f6ad6229",
00:08:27.558 "assigned_rate_limits": {
00:08:27.558 "rw_ios_per_sec": 0,
00:08:27.558 "rw_mbytes_per_sec": 0,
00:08:27.558 "r_mbytes_per_sec": 0,
00:08:27.558 "w_mbytes_per_sec": 0
00:08:27.558 },
00:08:27.558 "claimed": true,
00:08:27.558 "claim_type": "exclusive_write",
00:08:27.558 "zoned": false,
00:08:27.558 "supported_io_types": {
00:08:27.558 "read": true,
00:08:27.558 "write": true,
00:08:27.558 "unmap": true,
00:08:27.558 "flush": true,
00:08:27.558 "reset": true,
00:08:27.558 "nvme_admin": false,
00:08:27.558 "nvme_io": false,
00:08:27.558 "nvme_io_md": false,
00:08:27.559 "write_zeroes": true,
00:08:27.559 "zcopy": true,
00:08:27.559 "get_zone_info": false,
00:08:27.559 "zone_management": false,
00:08:27.559 "zone_append": false,
00:08:27.559 "compare": false,
00:08:27.559 "compare_and_write": false,
00:08:27.559 "abort": true,
00:08:27.559 "seek_hole": false,
00:08:27.559 "seek_data": false,
00:08:27.559 "copy": true,
00:08:27.559 "nvme_iov_md": false
00:08:27.559 },
00:08:27.559 "memory_domains": [
00:08:27.559 {
00:08:27.559 "dma_device_id": "system",
00:08:27.559 "dma_device_type": 1
00:08:27.559 },
00:08:27.559 {
00:08:27.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:27.559 "dma_device_type": 2
00:08:27.559 }
00:08:27.559 ],
00:08:27.559 "driver_specific": {}
00:08:27.559 }
00:08:27.559 ]
00:08:27.559 08:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:27.559 08:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:08:27.559 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:08:27.559 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:27.559 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:27.559 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:27.559 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:27.559 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:27.559 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:27.559 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:27.559 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:27.559 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:27.559 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:27.559 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:27.559 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:27.559 08:45:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:27.559 08:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:27.559 08:45:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:27.559 08:45:03 bdev_raid.raid_state_function_test --
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.559 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.559 "name": "Existed_Raid", 00:08:27.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.559 "strip_size_kb": 64, 00:08:27.559 "state": "configuring", 00:08:27.559 "raid_level": "raid0", 00:08:27.559 "superblock": false, 00:08:27.559 "num_base_bdevs": 3, 00:08:27.559 "num_base_bdevs_discovered": 2, 00:08:27.559 "num_base_bdevs_operational": 3, 00:08:27.559 "base_bdevs_list": [ 00:08:27.559 { 00:08:27.559 "name": "BaseBdev1", 00:08:27.559 "uuid": "0c49c479-4d16-49c4-b9fd-f50f174e8fac", 00:08:27.559 "is_configured": true, 00:08:27.559 "data_offset": 0, 00:08:27.559 "data_size": 65536 00:08:27.559 }, 00:08:27.559 { 00:08:27.559 "name": "BaseBdev2", 00:08:27.559 "uuid": "d4b60a4d-b771-402c-8330-0a59f6ad6229", 00:08:27.559 "is_configured": true, 00:08:27.559 "data_offset": 0, 00:08:27.559 "data_size": 65536 00:08:27.559 }, 00:08:27.559 { 00:08:27.559 "name": "BaseBdev3", 00:08:27.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.559 "is_configured": false, 00:08:27.559 "data_offset": 0, 00:08:27.559 "data_size": 0 00:08:27.559 } 00:08:27.559 ] 00:08:27.559 }' 00:08:27.559 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.559 08:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.128 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:28.128 08:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.128 08:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.128 [2024-10-05 08:45:04.427692] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:28.128 BaseBdev3 00:08:28.128 [2024-10-05 08:45:04.427793] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:28.128 [2024-10-05 08:45:04.427815] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:28.128 [2024-10-05 08:45:04.428115] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:28.128 [2024-10-05 08:45:04.428298] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:28.128 [2024-10-05 08:45:04.428311] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:28.128 [2024-10-05 08:45:04.428583] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:28.128 08:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.128 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:28.128 08:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:28.128 08:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:28.128 08:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:28.128 08:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:28.128 08:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:28.128 08:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:28.128 08:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.128 08:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.128 08:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.128 08:45:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:28.128 08:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.128 08:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.128 [ 00:08:28.128 { 00:08:28.128 "name": "BaseBdev3", 00:08:28.128 "aliases": [ 00:08:28.128 "b4357f7c-6a9b-4a28-a2a6-88c4fc9965ab" 00:08:28.128 ], 00:08:28.128 "product_name": "Malloc disk", 00:08:28.128 "block_size": 512, 00:08:28.128 "num_blocks": 65536, 00:08:28.128 "uuid": "b4357f7c-6a9b-4a28-a2a6-88c4fc9965ab", 00:08:28.128 "assigned_rate_limits": { 00:08:28.128 "rw_ios_per_sec": 0, 00:08:28.128 "rw_mbytes_per_sec": 0, 00:08:28.128 "r_mbytes_per_sec": 0, 00:08:28.128 "w_mbytes_per_sec": 0 00:08:28.128 }, 00:08:28.128 "claimed": true, 00:08:28.128 "claim_type": "exclusive_write", 00:08:28.128 "zoned": false, 00:08:28.128 "supported_io_types": { 00:08:28.128 "read": true, 00:08:28.128 "write": true, 00:08:28.128 "unmap": true, 00:08:28.128 "flush": true, 00:08:28.128 "reset": true, 00:08:28.128 "nvme_admin": false, 00:08:28.128 "nvme_io": false, 00:08:28.128 "nvme_io_md": false, 00:08:28.128 "write_zeroes": true, 00:08:28.128 "zcopy": true, 00:08:28.128 "get_zone_info": false, 00:08:28.128 "zone_management": false, 00:08:28.128 "zone_append": false, 00:08:28.128 "compare": false, 00:08:28.128 "compare_and_write": false, 00:08:28.128 "abort": true, 00:08:28.128 "seek_hole": false, 00:08:28.128 "seek_data": false, 00:08:28.128 "copy": true, 00:08:28.128 "nvme_iov_md": false 00:08:28.128 }, 00:08:28.128 "memory_domains": [ 00:08:28.128 { 00:08:28.128 "dma_device_id": "system", 00:08:28.128 "dma_device_type": 1 00:08:28.128 }, 00:08:28.128 { 00:08:28.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.128 "dma_device_type": 2 00:08:28.128 } 00:08:28.128 ], 00:08:28.128 "driver_specific": {} 00:08:28.128 } 00:08:28.128 ] 
00:08:28.128 08:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.128 08:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:28.128 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:28.128 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:28.128 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:28.128 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.128 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:28.128 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:28.128 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.129 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.129 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.129 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.129 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.129 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.129 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.129 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.129 08:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.129 08:45:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:28.129 08:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.129 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.129 "name": "Existed_Raid", 00:08:28.129 "uuid": "83e86837-1c9b-461f-a6bc-4ced9f22b6aa", 00:08:28.129 "strip_size_kb": 64, 00:08:28.129 "state": "online", 00:08:28.129 "raid_level": "raid0", 00:08:28.129 "superblock": false, 00:08:28.129 "num_base_bdevs": 3, 00:08:28.129 "num_base_bdevs_discovered": 3, 00:08:28.129 "num_base_bdevs_operational": 3, 00:08:28.129 "base_bdevs_list": [ 00:08:28.129 { 00:08:28.129 "name": "BaseBdev1", 00:08:28.129 "uuid": "0c49c479-4d16-49c4-b9fd-f50f174e8fac", 00:08:28.129 "is_configured": true, 00:08:28.129 "data_offset": 0, 00:08:28.129 "data_size": 65536 00:08:28.129 }, 00:08:28.129 { 00:08:28.129 "name": "BaseBdev2", 00:08:28.129 "uuid": "d4b60a4d-b771-402c-8330-0a59f6ad6229", 00:08:28.129 "is_configured": true, 00:08:28.129 "data_offset": 0, 00:08:28.129 "data_size": 65536 00:08:28.129 }, 00:08:28.129 { 00:08:28.129 "name": "BaseBdev3", 00:08:28.129 "uuid": "b4357f7c-6a9b-4a28-a2a6-88c4fc9965ab", 00:08:28.129 "is_configured": true, 00:08:28.129 "data_offset": 0, 00:08:28.129 "data_size": 65536 00:08:28.129 } 00:08:28.129 ] 00:08:28.129 }' 00:08:28.129 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.129 08:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.699 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:28.699 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:28.699 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:28.699 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:08:28.699 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:28.699 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:28.699 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:28.699 08:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.699 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:28.699 08:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.699 [2024-10-05 08:45:04.899212] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:28.699 08:45:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.699 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:28.699 "name": "Existed_Raid", 00:08:28.699 "aliases": [ 00:08:28.699 "83e86837-1c9b-461f-a6bc-4ced9f22b6aa" 00:08:28.699 ], 00:08:28.699 "product_name": "Raid Volume", 00:08:28.699 "block_size": 512, 00:08:28.699 "num_blocks": 196608, 00:08:28.699 "uuid": "83e86837-1c9b-461f-a6bc-4ced9f22b6aa", 00:08:28.699 "assigned_rate_limits": { 00:08:28.699 "rw_ios_per_sec": 0, 00:08:28.699 "rw_mbytes_per_sec": 0, 00:08:28.699 "r_mbytes_per_sec": 0, 00:08:28.699 "w_mbytes_per_sec": 0 00:08:28.699 }, 00:08:28.699 "claimed": false, 00:08:28.699 "zoned": false, 00:08:28.699 "supported_io_types": { 00:08:28.699 "read": true, 00:08:28.699 "write": true, 00:08:28.699 "unmap": true, 00:08:28.699 "flush": true, 00:08:28.699 "reset": true, 00:08:28.699 "nvme_admin": false, 00:08:28.699 "nvme_io": false, 00:08:28.699 "nvme_io_md": false, 00:08:28.699 "write_zeroes": true, 00:08:28.699 "zcopy": false, 00:08:28.699 "get_zone_info": false, 00:08:28.699 "zone_management": false, 00:08:28.699 
"zone_append": false, 00:08:28.699 "compare": false, 00:08:28.699 "compare_and_write": false, 00:08:28.699 "abort": false, 00:08:28.699 "seek_hole": false, 00:08:28.699 "seek_data": false, 00:08:28.699 "copy": false, 00:08:28.699 "nvme_iov_md": false 00:08:28.699 }, 00:08:28.699 "memory_domains": [ 00:08:28.699 { 00:08:28.699 "dma_device_id": "system", 00:08:28.699 "dma_device_type": 1 00:08:28.699 }, 00:08:28.699 { 00:08:28.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.699 "dma_device_type": 2 00:08:28.699 }, 00:08:28.699 { 00:08:28.699 "dma_device_id": "system", 00:08:28.699 "dma_device_type": 1 00:08:28.699 }, 00:08:28.699 { 00:08:28.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.699 "dma_device_type": 2 00:08:28.699 }, 00:08:28.699 { 00:08:28.699 "dma_device_id": "system", 00:08:28.699 "dma_device_type": 1 00:08:28.699 }, 00:08:28.699 { 00:08:28.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.699 "dma_device_type": 2 00:08:28.699 } 00:08:28.699 ], 00:08:28.699 "driver_specific": { 00:08:28.699 "raid": { 00:08:28.699 "uuid": "83e86837-1c9b-461f-a6bc-4ced9f22b6aa", 00:08:28.699 "strip_size_kb": 64, 00:08:28.699 "state": "online", 00:08:28.699 "raid_level": "raid0", 00:08:28.699 "superblock": false, 00:08:28.699 "num_base_bdevs": 3, 00:08:28.699 "num_base_bdevs_discovered": 3, 00:08:28.699 "num_base_bdevs_operational": 3, 00:08:28.699 "base_bdevs_list": [ 00:08:28.699 { 00:08:28.699 "name": "BaseBdev1", 00:08:28.699 "uuid": "0c49c479-4d16-49c4-b9fd-f50f174e8fac", 00:08:28.699 "is_configured": true, 00:08:28.699 "data_offset": 0, 00:08:28.699 "data_size": 65536 00:08:28.699 }, 00:08:28.699 { 00:08:28.699 "name": "BaseBdev2", 00:08:28.699 "uuid": "d4b60a4d-b771-402c-8330-0a59f6ad6229", 00:08:28.699 "is_configured": true, 00:08:28.699 "data_offset": 0, 00:08:28.699 "data_size": 65536 00:08:28.699 }, 00:08:28.699 { 00:08:28.699 "name": "BaseBdev3", 00:08:28.699 "uuid": "b4357f7c-6a9b-4a28-a2a6-88c4fc9965ab", 00:08:28.699 "is_configured": true, 
00:08:28.699 "data_offset": 0, 00:08:28.699 "data_size": 65536 00:08:28.699 } 00:08:28.699 ] 00:08:28.699 } 00:08:28.699 } 00:08:28.699 }' 00:08:28.699 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:28.699 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:28.699 BaseBdev2 00:08:28.699 BaseBdev3' 00:08:28.699 08:45:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.699 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:28.699 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.699 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:28.699 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.699 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.699 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.699 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.699 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:28.699 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:28.699 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.699 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.700 08:45:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:28.700 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.700 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.700 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.700 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:28.700 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:28.700 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.700 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:28.700 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.700 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.700 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.700 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.959 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:28.959 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:28.959 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:28.959 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.959 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.959 [2024-10-05 08:45:05.182427] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:28.959 [2024-10-05 08:45:05.182452] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:28.959 [2024-10-05 08:45:05.182509] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:28.959 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.959 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:28.959 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:28.959 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:28.959 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:28.959 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:28.959 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:28.959 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.959 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:28.959 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:28.959 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.959 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:28.959 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.959 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.959 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:28.959 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.959 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.959 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.959 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.959 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.959 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.959 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.959 "name": "Existed_Raid", 00:08:28.959 "uuid": "83e86837-1c9b-461f-a6bc-4ced9f22b6aa", 00:08:28.959 "strip_size_kb": 64, 00:08:28.959 "state": "offline", 00:08:28.959 "raid_level": "raid0", 00:08:28.959 "superblock": false, 00:08:28.959 "num_base_bdevs": 3, 00:08:28.959 "num_base_bdevs_discovered": 2, 00:08:28.959 "num_base_bdevs_operational": 2, 00:08:28.959 "base_bdevs_list": [ 00:08:28.959 { 00:08:28.959 "name": null, 00:08:28.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.959 "is_configured": false, 00:08:28.959 "data_offset": 0, 00:08:28.959 "data_size": 65536 00:08:28.959 }, 00:08:28.959 { 00:08:28.959 "name": "BaseBdev2", 00:08:28.959 "uuid": "d4b60a4d-b771-402c-8330-0a59f6ad6229", 00:08:28.959 "is_configured": true, 00:08:28.959 "data_offset": 0, 00:08:28.959 "data_size": 65536 00:08:28.959 }, 00:08:28.959 { 00:08:28.959 "name": "BaseBdev3", 00:08:28.959 "uuid": "b4357f7c-6a9b-4a28-a2a6-88c4fc9965ab", 00:08:28.959 "is_configured": true, 00:08:28.959 "data_offset": 0, 00:08:28.959 "data_size": 65536 00:08:28.959 } 00:08:28.959 ] 00:08:28.959 }' 00:08:28.959 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.959 08:45:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.529 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:29.529 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:29.529 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.529 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.529 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.529 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:29.529 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.529 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:29.529 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:29.529 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:29.529 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.529 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.529 [2024-10-05 08:45:05.813904] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:29.529 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.529 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:29.529 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:29.529 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.529 08:45:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.529 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:29.529 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.529 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.529 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:29.529 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:29.529 08:45:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:29.529 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.529 08:45:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.529 [2024-10-05 08:45:05.975988] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:29.529 [2024-10-05 08:45:05.976047] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:29.789 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.789 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:29.789 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:29.789 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.789 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:29.789 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.789 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x
00:08:29.789 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:29.789 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:08:29.789 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:08:29.789 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:08:29.789 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:08:29.789 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:29.789 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:08:29.789 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:29.789 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:29.789 BaseBdev2
00:08:29.789 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:29.789 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:08:29.789 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:08:29.790 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:29.790 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:08:29.790 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:29.790 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:29.790 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:29.790 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:29.790 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:29.790 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:29.790 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:29.790 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:29.790 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:29.790 [
00:08:29.790 {
00:08:29.790 "name": "BaseBdev2",
00:08:29.790 "aliases": [
00:08:29.790 "1e29dda1-5e67-4180-b42b-4b962e378937"
00:08:29.790 ],
00:08:29.790 "product_name": "Malloc disk",
00:08:29.790 "block_size": 512,
00:08:29.790 "num_blocks": 65536,
00:08:29.790 "uuid": "1e29dda1-5e67-4180-b42b-4b962e378937",
00:08:29.790 "assigned_rate_limits": {
00:08:29.790 "rw_ios_per_sec": 0,
00:08:29.790 "rw_mbytes_per_sec": 0,
00:08:29.790 "r_mbytes_per_sec": 0,
00:08:29.790 "w_mbytes_per_sec": 0
00:08:29.790 },
00:08:29.790 "claimed": false,
00:08:29.790 "zoned": false,
00:08:29.790 "supported_io_types": {
00:08:29.790 "read": true,
00:08:29.790 "write": true,
00:08:29.790 "unmap": true,
00:08:29.790 "flush": true,
00:08:29.790 "reset": true,
00:08:29.790 "nvme_admin": false,
00:08:29.790 "nvme_io": false,
00:08:29.790 "nvme_io_md": false,
00:08:29.790 "write_zeroes": true,
00:08:29.790 "zcopy": true,
00:08:29.790 "get_zone_info": false,
00:08:29.790 "zone_management": false,
00:08:29.790 "zone_append": false,
00:08:29.790 "compare": false,
00:08:29.790 "compare_and_write": false,
00:08:29.790 "abort": true,
00:08:29.790 "seek_hole": false,
00:08:29.790 "seek_data": false,
00:08:29.790 "copy": true,
00:08:29.790 "nvme_iov_md": false
00:08:29.790 },
00:08:29.790 "memory_domains": [
00:08:29.790 {
00:08:29.790 "dma_device_id": "system",
00:08:29.790 "dma_device_type": 1
00:08:29.790 },
00:08:29.790 {
00:08:29.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:29.790 "dma_device_type": 2
00:08:29.790 }
00:08:29.790 ],
00:08:29.790 "driver_specific": {}
00:08:29.790 }
00:08:29.790 ]
00:08:29.790 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:29.790 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:08:29.790 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:08:29.790 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:29.790 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:08:29.790 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:29.790 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:30.049 BaseBdev3
00:08:30.049 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:30.049 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:08:30.049 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:08:30.049 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:30.049 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:08:30.049 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:30.049 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:30.049 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:30.049 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:30.049 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:30.049 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:30.049 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:08:30.049 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:30.049 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:30.049 [
00:08:30.049 {
00:08:30.049 "name": "BaseBdev3",
00:08:30.049 "aliases": [
00:08:30.049 "76a344e1-12ce-412b-8642-9bdb9f34e3d3"
00:08:30.049 ],
00:08:30.049 "product_name": "Malloc disk",
00:08:30.049 "block_size": 512,
00:08:30.049 "num_blocks": 65536,
00:08:30.049 "uuid": "76a344e1-12ce-412b-8642-9bdb9f34e3d3",
00:08:30.049 "assigned_rate_limits": {
00:08:30.049 "rw_ios_per_sec": 0,
00:08:30.049 "rw_mbytes_per_sec": 0,
00:08:30.049 "r_mbytes_per_sec": 0,
00:08:30.049 "w_mbytes_per_sec": 0
00:08:30.049 },
00:08:30.049 "claimed": false,
00:08:30.049 "zoned": false,
00:08:30.049 "supported_io_types": {
00:08:30.049 "read": true,
00:08:30.049 "write": true,
00:08:30.049 "unmap": true,
00:08:30.049 "flush": true,
00:08:30.049 "reset": true,
00:08:30.049 "nvme_admin": false,
00:08:30.049 "nvme_io": false,
00:08:30.049 "nvme_io_md": false,
00:08:30.049 "write_zeroes": true,
00:08:30.049 "zcopy": true,
00:08:30.049 "get_zone_info": false,
00:08:30.049 "zone_management": false,
00:08:30.049 "zone_append": false,
00:08:30.049 "compare": false,
00:08:30.049 "compare_and_write": false,
00:08:30.049 "abort": true,
00:08:30.049 "seek_hole": false,
00:08:30.049 "seek_data": false,
00:08:30.049 "copy": true,
00:08:30.049 "nvme_iov_md": false
00:08:30.049 },
00:08:30.049 "memory_domains": [
00:08:30.049 {
00:08:30.049 "dma_device_id": "system",
00:08:30.049 "dma_device_type": 1
00:08:30.049 },
00:08:30.049 {
00:08:30.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:30.049 "dma_device_type": 2
00:08:30.049 }
00:08:30.049 ],
00:08:30.049 "driver_specific": {}
00:08:30.049 }
00:08:30.049 ]
00:08:30.049 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:30.049 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:08:30.049 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:08:30.049 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:30.049 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:30.049 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:30.049 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:30.049 [2024-10-05 08:45:06.309921] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:30.049 [2024-10-05 08:45:06.310027] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:30.049 [2024-10-05 08:45:06.310074] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:30.049 [2024-10-05 08:45:06.312059] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:30.049 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:30.049 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:30.049 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:30.049 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:30.049 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:30.049 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:30.049 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:30.049 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:30.049 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:30.049 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:30.049 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:30.049 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:30.049 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:30.049 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:30.049 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:30.049 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:30.049 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:30.049 "name": "Existed_Raid",
00:08:30.049 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:30.049 "strip_size_kb": 64,
00:08:30.049 "state": "configuring",
00:08:30.049 "raid_level": "raid0",
00:08:30.049 "superblock": false,
00:08:30.049 "num_base_bdevs": 3,
00:08:30.049 "num_base_bdevs_discovered": 2,
00:08:30.049 "num_base_bdevs_operational": 3,
00:08:30.049 "base_bdevs_list": [
00:08:30.049 {
00:08:30.049 "name": "BaseBdev1",
00:08:30.049 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:30.049 "is_configured": false,
00:08:30.049 "data_offset": 0,
00:08:30.049 "data_size": 0
00:08:30.049 },
00:08:30.049 {
00:08:30.049 "name": "BaseBdev2",
00:08:30.049 "uuid": "1e29dda1-5e67-4180-b42b-4b962e378937",
00:08:30.049 "is_configured": true,
00:08:30.049 "data_offset": 0,
00:08:30.049 "data_size": 65536
00:08:30.049 },
00:08:30.049 {
00:08:30.049 "name": "BaseBdev3",
00:08:30.049 "uuid": "76a344e1-12ce-412b-8642-9bdb9f34e3d3",
00:08:30.049 "is_configured": true,
00:08:30.049 "data_offset": 0,
00:08:30.049 "data_size": 65536
00:08:30.049 }
00:08:30.049 ]
00:08:30.049 }'
00:08:30.049 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:30.049 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:30.309 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:08:30.309 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:30.309 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:30.309 [2024-10-05 08:45:06.753096] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:08:30.309 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:30.309 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:30.309 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:30.309 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:30.309 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:30.309 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:30.309 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:30.309 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:30.309 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:30.309 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:30.309 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:30.309 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:30.309 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:30.309 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:30.309 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:30.568 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:30.568 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:30.568 "name": "Existed_Raid",
00:08:30.568 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:30.568 "strip_size_kb": 64,
00:08:30.568 "state": "configuring",
00:08:30.568 "raid_level": "raid0",
00:08:30.568 "superblock": false,
00:08:30.568 "num_base_bdevs": 3,
00:08:30.568 "num_base_bdevs_discovered": 1,
00:08:30.568 "num_base_bdevs_operational": 3,
00:08:30.568 "base_bdevs_list": [
00:08:30.568 {
00:08:30.568 "name": "BaseBdev1",
00:08:30.568 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:30.568 "is_configured": false,
00:08:30.568 "data_offset": 0,
00:08:30.568 "data_size": 0
00:08:30.568 },
00:08:30.568 {
00:08:30.568 "name": null,
00:08:30.568 "uuid": "1e29dda1-5e67-4180-b42b-4b962e378937",
00:08:30.568 "is_configured": false,
00:08:30.568 "data_offset": 0,
00:08:30.568 "data_size": 65536
00:08:30.568 },
00:08:30.568 {
00:08:30.568 "name": "BaseBdev3",
00:08:30.568 "uuid": "76a344e1-12ce-412b-8642-9bdb9f34e3d3",
00:08:30.568 "is_configured": true,
00:08:30.568 "data_offset": 0,
00:08:30.568 "data_size": 65536
00:08:30.568 }
00:08:30.568 ]
00:08:30.568 }'
00:08:30.568 08:45:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:30.568 08:45:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:30.828 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:30.828 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:08:30.828 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:30.828 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:30.828 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:30.828 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:08:30.828 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:30.828 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:30.828 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:30.828 [2024-10-05 08:45:07.293893] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:30.828 BaseBdev1
00:08:30.828 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:30.828 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:08:30.828 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:08:30.828 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:30.828 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:08:30.828 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:30.828 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:30.828 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:30.828 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:30.828 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.088 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:31.088 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:31.088 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:31.088 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.088 [
00:08:31.088 {
00:08:31.088 "name": "BaseBdev1",
00:08:31.088 "aliases": [
00:08:31.088 "8b5c66f8-55c9-4ebe-a105-c05786716f9a"
00:08:31.088 ],
00:08:31.088 "product_name": "Malloc disk",
00:08:31.088 "block_size": 512,
00:08:31.088 "num_blocks": 65536,
00:08:31.088 "uuid": "8b5c66f8-55c9-4ebe-a105-c05786716f9a",
00:08:31.088 "assigned_rate_limits": {
00:08:31.088 "rw_ios_per_sec": 0,
00:08:31.088 "rw_mbytes_per_sec": 0,
00:08:31.088 "r_mbytes_per_sec": 0,
00:08:31.088 "w_mbytes_per_sec": 0
00:08:31.088 },
00:08:31.088 "claimed": true,
00:08:31.088 "claim_type": "exclusive_write",
00:08:31.088 "zoned": false,
00:08:31.088 "supported_io_types": {
00:08:31.088 "read": true,
00:08:31.088 "write": true,
00:08:31.088 "unmap": true,
00:08:31.088 "flush": true,
00:08:31.088 "reset": true,
00:08:31.088 "nvme_admin": false,
00:08:31.088 "nvme_io": false,
00:08:31.088 "nvme_io_md": false,
00:08:31.088 "write_zeroes": true,
00:08:31.088 "zcopy": true,
00:08:31.088 "get_zone_info": false,
00:08:31.088 "zone_management": false,
00:08:31.088 "zone_append": false,
00:08:31.088 "compare": false,
00:08:31.088 "compare_and_write": false,
00:08:31.088 "abort": true,
00:08:31.088 "seek_hole": false,
00:08:31.088 "seek_data": false,
00:08:31.088 "copy": true,
00:08:31.088 "nvme_iov_md": false
00:08:31.088 },
00:08:31.088 "memory_domains": [
00:08:31.088 {
00:08:31.088 "dma_device_id": "system",
00:08:31.088 "dma_device_type": 1
00:08:31.088 },
00:08:31.088 {
00:08:31.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:31.088 "dma_device_type": 2
00:08:31.088 }
00:08:31.088 ],
00:08:31.088 "driver_specific": {}
00:08:31.088 }
00:08:31.088 ]
00:08:31.088 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:31.088 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:08:31.088 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:31.088 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:31.088 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:31.088 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:31.088 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:31.088 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:31.088 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:31.088 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:31.088 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:31.088 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:31.088 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:31.088 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:31.088 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:31.088 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.088 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:31.088 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:31.088 "name": "Existed_Raid",
00:08:31.088 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:31.088 "strip_size_kb": 64,
00:08:31.088 "state": "configuring",
00:08:31.088 "raid_level": "raid0",
00:08:31.088 "superblock": false,
00:08:31.088 "num_base_bdevs": 3,
00:08:31.088 "num_base_bdevs_discovered": 2,
00:08:31.088 "num_base_bdevs_operational": 3,
00:08:31.088 "base_bdevs_list": [
00:08:31.088 {
00:08:31.088 "name": "BaseBdev1",
00:08:31.088 "uuid": "8b5c66f8-55c9-4ebe-a105-c05786716f9a",
00:08:31.088 "is_configured": true,
00:08:31.088 "data_offset": 0,
00:08:31.088 "data_size": 65536
00:08:31.088 },
00:08:31.088 {
00:08:31.088 "name": null,
00:08:31.088 "uuid": "1e29dda1-5e67-4180-b42b-4b962e378937",
00:08:31.088 "is_configured": false,
00:08:31.088 "data_offset": 0,
00:08:31.088 "data_size": 65536
00:08:31.088 },
00:08:31.088 {
00:08:31.088 "name": "BaseBdev3",
00:08:31.088 "uuid": "76a344e1-12ce-412b-8642-9bdb9f34e3d3",
00:08:31.088 "is_configured": true,
00:08:31.088 "data_offset": 0,
00:08:31.088 "data_size": 65536
00:08:31.088 }
00:08:31.088 ]
00:08:31.088 }'
00:08:31.088 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:31.088 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.348 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:31.348 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:08:31.348 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:31.348 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.348 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:31.348 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:08:31.348 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:08:31.348 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:31.348 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.348 [2024-10-05 08:45:07.749166] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:08:31.348 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:31.348 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:31.348 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:31.348 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:31.348 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:31.348 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:31.348 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:31.348 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:31.348 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:31.348 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:31.348 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:31.348 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:31.348 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:31.348 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:31.348 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.348 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:31.348 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:31.349 "name": "Existed_Raid",
00:08:31.349 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:31.349 "strip_size_kb": 64,
00:08:31.349 "state": "configuring",
00:08:31.349 "raid_level": "raid0",
00:08:31.349 "superblock": false,
00:08:31.349 "num_base_bdevs": 3,
00:08:31.349 "num_base_bdevs_discovered": 1,
00:08:31.349 "num_base_bdevs_operational": 3,
00:08:31.349 "base_bdevs_list": [
00:08:31.349 {
00:08:31.349 "name": "BaseBdev1",
00:08:31.349 "uuid": "8b5c66f8-55c9-4ebe-a105-c05786716f9a",
00:08:31.349 "is_configured": true,
00:08:31.349 "data_offset": 0,
00:08:31.349 "data_size": 65536
00:08:31.349 },
00:08:31.349 {
00:08:31.349 "name": null,
00:08:31.349 "uuid": "1e29dda1-5e67-4180-b42b-4b962e378937",
00:08:31.349 "is_configured": false,
00:08:31.349 "data_offset": 0,
00:08:31.349 "data_size": 65536
00:08:31.349 },
00:08:31.349 {
00:08:31.349 "name": null,
00:08:31.349 "uuid": "76a344e1-12ce-412b-8642-9bdb9f34e3d3",
00:08:31.349 "is_configured": false,
00:08:31.349 "data_offset": 0,
00:08:31.349 "data_size": 65536
00:08:31.349 }
00:08:31.349 ]
00:08:31.349 }'
00:08:31.349 08:45:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:31.349 08:45:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.945 08:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:31.945 08:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:08:31.945 08:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:31.945 08:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.945 08:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:31.945 08:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:08:31.945 08:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:08:31.945 08:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:31.945 08:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.945 [2024-10-05 08:45:08.208464] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:31.945 08:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:31.945 08:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:31.945 08:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:31.945 08:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:31.945 08:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:31.945 08:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:31.945 08:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:31.945 08:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:31.945 08:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:31.945 08:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:31.945 08:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:31.945 08:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:31.945 08:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:31.945 08:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.945 08:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:31.945 08:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:31.945 08:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:31.945 "name": "Existed_Raid",
00:08:31.945 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:31.945 "strip_size_kb": 64,
00:08:31.945 "state": "configuring",
00:08:31.945 "raid_level": "raid0",
00:08:31.945 "superblock": false,
00:08:31.945 "num_base_bdevs": 3,
00:08:31.945 "num_base_bdevs_discovered": 2,
00:08:31.945 "num_base_bdevs_operational": 3,
00:08:31.945 "base_bdevs_list": [
00:08:31.945 {
00:08:31.945 "name": "BaseBdev1",
00:08:31.945 "uuid": "8b5c66f8-55c9-4ebe-a105-c05786716f9a",
00:08:31.945 "is_configured": true,
00:08:31.945 "data_offset": 0,
00:08:31.945 "data_size": 65536
00:08:31.945 },
00:08:31.945 {
00:08:31.945 "name": null,
00:08:31.945 "uuid": "1e29dda1-5e67-4180-b42b-4b962e378937",
00:08:31.945 "is_configured": false,
00:08:31.945 "data_offset": 0,
00:08:31.945 "data_size": 65536
00:08:31.945 },
00:08:31.945 {
00:08:31.945 "name": "BaseBdev3",
00:08:31.945 "uuid": "76a344e1-12ce-412b-8642-9bdb9f34e3d3",
00:08:31.945 "is_configured": true,
00:08:31.945 "data_offset": 0,
00:08:31.945 "data_size": 65536
00:08:31.945 }
00:08:31.945 ]
00:08:31.945 }'
00:08:31.945 08:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:31.945 08:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:32.204 08:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:32.204 08:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:32.204 08:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:32.204 08:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:08:32.204 08:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:32.462 08:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:08:32.462 08:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:08:32.462 08:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:32.462 08:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:32.462 [2024-10-05 08:45:08.695648] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:08:32.462 08:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:32.462 08:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:32.462 08:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:32.462 08:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:32.462 08:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:32.462 08:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:32.462 08:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:32.462 08:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:32.462 08:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:32.462 08:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:32.462 08:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:32.462 08:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:32.462 08:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:32.462 08:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:32.462 08:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:32.462 08:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:32.462 08:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:32.462 "name": "Existed_Raid",
00:08:32.462 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:32.462 "strip_size_kb": 64,
00:08:32.462 "state": "configuring",
00:08:32.462 "raid_level": "raid0",
00:08:32.462 "superblock": false,
00:08:32.462 "num_base_bdevs": 3,
00:08:32.462 "num_base_bdevs_discovered": 1,
00:08:32.462 "num_base_bdevs_operational": 3,
00:08:32.462 "base_bdevs_list": [
00:08:32.462 {
00:08:32.462 "name": null,
00:08:32.462 "uuid": "8b5c66f8-55c9-4ebe-a105-c05786716f9a",
00:08:32.462 "is_configured": false,
00:08:32.462 "data_offset": 0,
00:08:32.462 "data_size": 65536
00:08:32.462 },
00:08:32.462 {
00:08:32.462 "name": null,
00:08:32.462 "uuid": "1e29dda1-5e67-4180-b42b-4b962e378937",
00:08:32.462 "is_configured": false,
00:08:32.462 "data_offset": 0,
00:08:32.462 "data_size": 65536
00:08:32.462 },
00:08:32.462 {
00:08:32.462 "name": "BaseBdev3",
00:08:32.462 "uuid": "76a344e1-12ce-412b-8642-9bdb9f34e3d3",
00:08:32.462 "is_configured": true,
00:08:32.462 "data_offset": 0,
00:08:32.462 "data_size": 65536
00:08:32.462 }
00:08:32.462 ]
00:08:32.462 }'
00:08:32.462 08:45:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:32.462 08:45:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:33.032 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:08:33.032 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:33.032 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:33.032 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:33.032 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:33.032 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:08:33.032 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:08:33.032 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:33.032 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:33.032 [2024-10-05 08:45:09.246487] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:33.032 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:33.032 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:33.032 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:33.032 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:33.032 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:33.032 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:33.032 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:33.032 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:33.032 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:33.032 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:33.032 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:33.032 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:33.032 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:33.032 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:33.032 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:33.032 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:33.032 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:33.032 "name": "Existed_Raid",
00:08:33.032 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:33.032 "strip_size_kb": 64,
00:08:33.032 "state": "configuring",
00:08:33.032 "raid_level": "raid0",
00:08:33.032 "superblock": false,
00:08:33.032 "num_base_bdevs": 3,
00:08:33.032 "num_base_bdevs_discovered": 2,
00:08:33.032 "num_base_bdevs_operational": 3,
00:08:33.032 "base_bdevs_list": [
00:08:33.032 {
00:08:33.032 "name": null,
00:08:33.032 "uuid": "8b5c66f8-55c9-4ebe-a105-c05786716f9a",
00:08:33.032 "is_configured": false,
00:08:33.032 "data_offset": 0,
00:08:33.032 "data_size": 65536
00:08:33.032 },
00:08:33.032 {
00:08:33.032 "name": "BaseBdev2",
00:08:33.032 "uuid": "1e29dda1-5e67-4180-b42b-4b962e378937",
00:08:33.032 "is_configured": true,
00:08:33.032 "data_offset": 0,
00:08:33.032 "data_size": 65536
00:08:33.032 },
00:08:33.032 {
00:08:33.032 "name": "BaseBdev3",
00:08:33.032 "uuid": "76a344e1-12ce-412b-8642-9bdb9f34e3d3",
00:08:33.032 "is_configured": true,
00:08:33.032 "data_offset": 0,
00:08:33.032 "data_size": 65536
00:08:33.032 }
00:08:33.032 ]
00:08:33.032 }'
00:08:33.032 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:33.032 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:33.292 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:33.292 08:45:09
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:33.292 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.292 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.292 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.292 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:33.292 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:33.292 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.292 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.292 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.292 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.552 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8b5c66f8-55c9-4ebe-a105-c05786716f9a 00:08:33.552 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.552 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.552 [2024-10-05 08:45:09.808280] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:33.552 [2024-10-05 08:45:09.808321] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:33.552 [2024-10-05 08:45:09.808332] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:33.552 [2024-10-05 08:45:09.808637] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:08:33.552 [2024-10-05 08:45:09.808806] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:33.552 [2024-10-05 08:45:09.808815] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:33.552 [2024-10-05 08:45:09.809116] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:33.552 NewBaseBdev 00:08:33.552 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.552 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:33.552 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:33.552 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:33.552 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:33.552 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:33.552 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:33.553 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:33.553 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.553 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.553 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.553 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:33.553 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.553 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:33.553 [ 00:08:33.553 { 00:08:33.553 "name": "NewBaseBdev", 00:08:33.553 "aliases": [ 00:08:33.553 "8b5c66f8-55c9-4ebe-a105-c05786716f9a" 00:08:33.553 ], 00:08:33.553 "product_name": "Malloc disk", 00:08:33.553 "block_size": 512, 00:08:33.553 "num_blocks": 65536, 00:08:33.553 "uuid": "8b5c66f8-55c9-4ebe-a105-c05786716f9a", 00:08:33.553 "assigned_rate_limits": { 00:08:33.553 "rw_ios_per_sec": 0, 00:08:33.553 "rw_mbytes_per_sec": 0, 00:08:33.553 "r_mbytes_per_sec": 0, 00:08:33.553 "w_mbytes_per_sec": 0 00:08:33.553 }, 00:08:33.553 "claimed": true, 00:08:33.553 "claim_type": "exclusive_write", 00:08:33.553 "zoned": false, 00:08:33.553 "supported_io_types": { 00:08:33.553 "read": true, 00:08:33.553 "write": true, 00:08:33.553 "unmap": true, 00:08:33.553 "flush": true, 00:08:33.553 "reset": true, 00:08:33.553 "nvme_admin": false, 00:08:33.553 "nvme_io": false, 00:08:33.553 "nvme_io_md": false, 00:08:33.553 "write_zeroes": true, 00:08:33.553 "zcopy": true, 00:08:33.553 "get_zone_info": false, 00:08:33.553 "zone_management": false, 00:08:33.553 "zone_append": false, 00:08:33.553 "compare": false, 00:08:33.553 "compare_and_write": false, 00:08:33.553 "abort": true, 00:08:33.553 "seek_hole": false, 00:08:33.553 "seek_data": false, 00:08:33.553 "copy": true, 00:08:33.553 "nvme_iov_md": false 00:08:33.553 }, 00:08:33.553 "memory_domains": [ 00:08:33.553 { 00:08:33.553 "dma_device_id": "system", 00:08:33.553 "dma_device_type": 1 00:08:33.553 }, 00:08:33.553 { 00:08:33.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.553 "dma_device_type": 2 00:08:33.553 } 00:08:33.553 ], 00:08:33.553 "driver_specific": {} 00:08:33.553 } 00:08:33.553 ] 00:08:33.553 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.553 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:33.553 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:08:33.553 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.553 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:33.553 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:33.553 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.553 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.553 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.553 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.553 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.553 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.553 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.553 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.553 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.553 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.553 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.553 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.553 "name": "Existed_Raid", 00:08:33.553 "uuid": "15a7f29e-2e96-4d0c-8f06-696c659da565", 00:08:33.553 "strip_size_kb": 64, 00:08:33.553 "state": "online", 00:08:33.553 "raid_level": "raid0", 00:08:33.553 "superblock": false, 00:08:33.553 "num_base_bdevs": 3, 00:08:33.553 
"num_base_bdevs_discovered": 3, 00:08:33.553 "num_base_bdevs_operational": 3, 00:08:33.553 "base_bdevs_list": [ 00:08:33.553 { 00:08:33.553 "name": "NewBaseBdev", 00:08:33.553 "uuid": "8b5c66f8-55c9-4ebe-a105-c05786716f9a", 00:08:33.553 "is_configured": true, 00:08:33.553 "data_offset": 0, 00:08:33.553 "data_size": 65536 00:08:33.553 }, 00:08:33.553 { 00:08:33.553 "name": "BaseBdev2", 00:08:33.553 "uuid": "1e29dda1-5e67-4180-b42b-4b962e378937", 00:08:33.553 "is_configured": true, 00:08:33.553 "data_offset": 0, 00:08:33.553 "data_size": 65536 00:08:33.553 }, 00:08:33.553 { 00:08:33.553 "name": "BaseBdev3", 00:08:33.553 "uuid": "76a344e1-12ce-412b-8642-9bdb9f34e3d3", 00:08:33.553 "is_configured": true, 00:08:33.553 "data_offset": 0, 00:08:33.553 "data_size": 65536 00:08:33.553 } 00:08:33.553 ] 00:08:33.553 }' 00:08:33.553 08:45:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.553 08:45:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.812 08:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:33.812 08:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:33.812 08:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:33.812 08:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:33.812 08:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:33.812 08:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:33.812 08:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:33.812 08:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:33.812 08:45:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.812 08:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.812 [2024-10-05 08:45:10.263827] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:34.072 08:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.072 08:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:34.072 "name": "Existed_Raid", 00:08:34.072 "aliases": [ 00:08:34.072 "15a7f29e-2e96-4d0c-8f06-696c659da565" 00:08:34.072 ], 00:08:34.072 "product_name": "Raid Volume", 00:08:34.072 "block_size": 512, 00:08:34.072 "num_blocks": 196608, 00:08:34.072 "uuid": "15a7f29e-2e96-4d0c-8f06-696c659da565", 00:08:34.072 "assigned_rate_limits": { 00:08:34.072 "rw_ios_per_sec": 0, 00:08:34.072 "rw_mbytes_per_sec": 0, 00:08:34.072 "r_mbytes_per_sec": 0, 00:08:34.072 "w_mbytes_per_sec": 0 00:08:34.072 }, 00:08:34.072 "claimed": false, 00:08:34.072 "zoned": false, 00:08:34.072 "supported_io_types": { 00:08:34.072 "read": true, 00:08:34.072 "write": true, 00:08:34.072 "unmap": true, 00:08:34.072 "flush": true, 00:08:34.072 "reset": true, 00:08:34.072 "nvme_admin": false, 00:08:34.072 "nvme_io": false, 00:08:34.072 "nvme_io_md": false, 00:08:34.072 "write_zeroes": true, 00:08:34.072 "zcopy": false, 00:08:34.072 "get_zone_info": false, 00:08:34.072 "zone_management": false, 00:08:34.072 "zone_append": false, 00:08:34.072 "compare": false, 00:08:34.072 "compare_and_write": false, 00:08:34.072 "abort": false, 00:08:34.072 "seek_hole": false, 00:08:34.072 "seek_data": false, 00:08:34.072 "copy": false, 00:08:34.072 "nvme_iov_md": false 00:08:34.072 }, 00:08:34.072 "memory_domains": [ 00:08:34.072 { 00:08:34.072 "dma_device_id": "system", 00:08:34.072 "dma_device_type": 1 00:08:34.072 }, 00:08:34.072 { 00:08:34.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.072 "dma_device_type": 2 00:08:34.072 }, 
00:08:34.072 { 00:08:34.072 "dma_device_id": "system", 00:08:34.072 "dma_device_type": 1 00:08:34.072 }, 00:08:34.072 { 00:08:34.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.072 "dma_device_type": 2 00:08:34.072 }, 00:08:34.072 { 00:08:34.072 "dma_device_id": "system", 00:08:34.072 "dma_device_type": 1 00:08:34.072 }, 00:08:34.072 { 00:08:34.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.072 "dma_device_type": 2 00:08:34.072 } 00:08:34.072 ], 00:08:34.072 "driver_specific": { 00:08:34.072 "raid": { 00:08:34.072 "uuid": "15a7f29e-2e96-4d0c-8f06-696c659da565", 00:08:34.072 "strip_size_kb": 64, 00:08:34.072 "state": "online", 00:08:34.072 "raid_level": "raid0", 00:08:34.072 "superblock": false, 00:08:34.072 "num_base_bdevs": 3, 00:08:34.072 "num_base_bdevs_discovered": 3, 00:08:34.072 "num_base_bdevs_operational": 3, 00:08:34.072 "base_bdevs_list": [ 00:08:34.072 { 00:08:34.072 "name": "NewBaseBdev", 00:08:34.072 "uuid": "8b5c66f8-55c9-4ebe-a105-c05786716f9a", 00:08:34.072 "is_configured": true, 00:08:34.072 "data_offset": 0, 00:08:34.072 "data_size": 65536 00:08:34.072 }, 00:08:34.072 { 00:08:34.072 "name": "BaseBdev2", 00:08:34.072 "uuid": "1e29dda1-5e67-4180-b42b-4b962e378937", 00:08:34.072 "is_configured": true, 00:08:34.072 "data_offset": 0, 00:08:34.072 "data_size": 65536 00:08:34.072 }, 00:08:34.072 { 00:08:34.072 "name": "BaseBdev3", 00:08:34.072 "uuid": "76a344e1-12ce-412b-8642-9bdb9f34e3d3", 00:08:34.072 "is_configured": true, 00:08:34.072 "data_offset": 0, 00:08:34.072 "data_size": 65536 00:08:34.072 } 00:08:34.072 ] 00:08:34.072 } 00:08:34.072 } 00:08:34.072 }' 00:08:34.072 08:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:34.072 08:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:34.072 BaseBdev2 00:08:34.072 BaseBdev3' 00:08:34.072 08:45:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.072 08:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:34.072 08:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:34.072 08:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:34.072 08:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.072 08:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.072 08:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.072 08:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.072 08:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:34.072 08:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:34.072 08:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:34.072 08:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:34.072 08:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.072 08:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.072 08:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.072 08:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.072 08:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:34.073 08:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:34.073 08:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:34.073 08:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:34.073 08:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.073 08:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.073 08:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.073 08:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.332 08:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:34.332 08:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:34.332 08:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:34.332 08:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.332 08:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.332 [2024-10-05 08:45:10.555009] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:34.332 [2024-10-05 08:45:10.555034] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:34.332 [2024-10-05 08:45:10.555113] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:34.332 [2024-10-05 08:45:10.555171] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:34.332 [2024-10-05 08:45:10.555184] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:34.332 08:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.332 08:45:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63312 00:08:34.332 08:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 63312 ']' 00:08:34.332 08:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 63312 00:08:34.332 08:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:34.332 08:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:34.332 08:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63312 00:08:34.332 killing process with pid 63312 00:08:34.332 08:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:34.332 08:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:34.332 08:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63312' 00:08:34.332 08:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 63312 00:08:34.332 [2024-10-05 08:45:10.604431] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:34.332 08:45:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 63312 00:08:34.592 [2024-10-05 08:45:10.914899] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:35.971 08:45:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:35.971 00:08:35.971 real 0m10.676s 00:08:35.971 user 0m16.624s 00:08:35.971 sys 0m1.950s 00:08:35.971 08:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- 
# xtrace_disable 00:08:35.971 08:45:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.971 ************************************ 00:08:35.971 END TEST raid_state_function_test 00:08:35.971 ************************************ 00:08:35.971 08:45:12 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:35.971 08:45:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:35.971 08:45:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:35.971 08:45:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:35.971 ************************************ 00:08:35.971 START TEST raid_state_function_test_sb 00:08:35.971 ************************************ 00:08:35.971 08:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:08:35.971 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:35.971 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:35.971 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:35.971 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:35.972 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:35.972 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:35.972 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:35.972 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:35.972 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:35.972 08:45:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:35.972 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:35.972 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:35.972 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:35.972 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:35.972 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:35.972 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:35.972 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:35.972 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:35.972 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:35.972 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:35.972 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:35.972 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:35.972 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:35.972 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:35.972 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:35.972 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:35.972 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63873 00:08:35.972 08:45:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:35.972 Process raid pid: 63873 00:08:35.972 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63873' 00:08:35.972 08:45:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63873 00:08:35.972 08:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 63873 ']' 00:08:35.972 08:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.972 08:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:35.972 08:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.972 08:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:35.972 08:45:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.972 [2024-10-05 08:45:12.415587] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:08:35.972 [2024-10-05 08:45:12.415793] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.232 [2024-10-05 08:45:12.580361] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.491 [2024-10-05 08:45:12.824135] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.751 [2024-10-05 08:45:13.048291] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:36.751 [2024-10-05 08:45:13.048429] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:37.011 08:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:37.011 08:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:37.011 08:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:37.011 08:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.011 08:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.011 [2024-10-05 08:45:13.233195] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:37.011 [2024-10-05 08:45:13.233255] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:37.011 [2024-10-05 08:45:13.233265] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:37.011 [2024-10-05 08:45:13.233276] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:37.011 [2024-10-05 08:45:13.233282] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:08:37.011 [2024-10-05 08:45:13.233291] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:37.011 08:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.011 08:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:37.011 08:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.011 08:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.011 08:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.011 08:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.011 08:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.011 08:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.011 08:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.011 08:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.011 08:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.011 08:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.011 08:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.011 08:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.011 08:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.011 08:45:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.011 08:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.011 "name": "Existed_Raid", 00:08:37.011 "uuid": "df3a0207-8b30-4207-9ed9-ce24ea39ba45", 00:08:37.011 "strip_size_kb": 64, 00:08:37.011 "state": "configuring", 00:08:37.011 "raid_level": "raid0", 00:08:37.011 "superblock": true, 00:08:37.011 "num_base_bdevs": 3, 00:08:37.011 "num_base_bdevs_discovered": 0, 00:08:37.011 "num_base_bdevs_operational": 3, 00:08:37.011 "base_bdevs_list": [ 00:08:37.011 { 00:08:37.011 "name": "BaseBdev1", 00:08:37.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.011 "is_configured": false, 00:08:37.011 "data_offset": 0, 00:08:37.011 "data_size": 0 00:08:37.011 }, 00:08:37.011 { 00:08:37.011 "name": "BaseBdev2", 00:08:37.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.011 "is_configured": false, 00:08:37.011 "data_offset": 0, 00:08:37.011 "data_size": 0 00:08:37.011 }, 00:08:37.011 { 00:08:37.011 "name": "BaseBdev3", 00:08:37.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.011 "is_configured": false, 00:08:37.011 "data_offset": 0, 00:08:37.011 "data_size": 0 00:08:37.011 } 00:08:37.011 ] 00:08:37.011 }' 00:08:37.011 08:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.011 08:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.272 08:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:37.272 08:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.272 08:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.272 [2024-10-05 08:45:13.656350] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:37.272 [2024-10-05 08:45:13.656444] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:37.272 08:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.272 08:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:37.272 08:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.272 08:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.272 [2024-10-05 08:45:13.668372] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:37.272 [2024-10-05 08:45:13.668450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:37.272 [2024-10-05 08:45:13.668476] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:37.272 [2024-10-05 08:45:13.668498] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:37.272 [2024-10-05 08:45:13.668515] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:37.272 [2024-10-05 08:45:13.668536] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:37.272 08:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.272 08:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:37.272 08:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.272 08:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.532 [2024-10-05 08:45:13.753888] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:37.532 BaseBdev1 
00:08:37.532 08:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.532 08:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:37.532 08:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:37.532 08:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:37.532 08:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:37.532 08:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:37.532 08:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:37.532 08:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:37.532 08:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.532 08:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.532 08:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.532 08:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:37.532 08:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.533 08:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.533 [ 00:08:37.533 { 00:08:37.533 "name": "BaseBdev1", 00:08:37.533 "aliases": [ 00:08:37.533 "1d342f21-74bb-4afe-afb2-a1bdbcf8384a" 00:08:37.533 ], 00:08:37.533 "product_name": "Malloc disk", 00:08:37.533 "block_size": 512, 00:08:37.533 "num_blocks": 65536, 00:08:37.533 "uuid": "1d342f21-74bb-4afe-afb2-a1bdbcf8384a", 00:08:37.533 "assigned_rate_limits": { 00:08:37.533 
"rw_ios_per_sec": 0, 00:08:37.533 "rw_mbytes_per_sec": 0, 00:08:37.533 "r_mbytes_per_sec": 0, 00:08:37.533 "w_mbytes_per_sec": 0 00:08:37.533 }, 00:08:37.533 "claimed": true, 00:08:37.533 "claim_type": "exclusive_write", 00:08:37.533 "zoned": false, 00:08:37.533 "supported_io_types": { 00:08:37.533 "read": true, 00:08:37.533 "write": true, 00:08:37.533 "unmap": true, 00:08:37.533 "flush": true, 00:08:37.533 "reset": true, 00:08:37.533 "nvme_admin": false, 00:08:37.533 "nvme_io": false, 00:08:37.533 "nvme_io_md": false, 00:08:37.533 "write_zeroes": true, 00:08:37.533 "zcopy": true, 00:08:37.533 "get_zone_info": false, 00:08:37.533 "zone_management": false, 00:08:37.533 "zone_append": false, 00:08:37.533 "compare": false, 00:08:37.533 "compare_and_write": false, 00:08:37.533 "abort": true, 00:08:37.533 "seek_hole": false, 00:08:37.533 "seek_data": false, 00:08:37.533 "copy": true, 00:08:37.533 "nvme_iov_md": false 00:08:37.533 }, 00:08:37.533 "memory_domains": [ 00:08:37.533 { 00:08:37.533 "dma_device_id": "system", 00:08:37.533 "dma_device_type": 1 00:08:37.533 }, 00:08:37.533 { 00:08:37.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.533 "dma_device_type": 2 00:08:37.533 } 00:08:37.533 ], 00:08:37.533 "driver_specific": {} 00:08:37.533 } 00:08:37.533 ] 00:08:37.533 08:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.533 08:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:37.533 08:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:37.533 08:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.533 08:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.533 08:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:37.533 08:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.533 08:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.533 08:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.533 08:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.533 08:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.533 08:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.533 08:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.533 08:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.533 08:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.533 08:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.533 08:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.533 08:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.533 "name": "Existed_Raid", 00:08:37.533 "uuid": "039ce91c-d4a0-4771-b868-0a70ef541787", 00:08:37.533 "strip_size_kb": 64, 00:08:37.533 "state": "configuring", 00:08:37.533 "raid_level": "raid0", 00:08:37.533 "superblock": true, 00:08:37.533 "num_base_bdevs": 3, 00:08:37.533 "num_base_bdevs_discovered": 1, 00:08:37.533 "num_base_bdevs_operational": 3, 00:08:37.533 "base_bdevs_list": [ 00:08:37.533 { 00:08:37.533 "name": "BaseBdev1", 00:08:37.533 "uuid": "1d342f21-74bb-4afe-afb2-a1bdbcf8384a", 00:08:37.533 "is_configured": true, 00:08:37.533 "data_offset": 2048, 00:08:37.533 "data_size": 63488 
00:08:37.533 }, 00:08:37.533 { 00:08:37.533 "name": "BaseBdev2", 00:08:37.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.533 "is_configured": false, 00:08:37.533 "data_offset": 0, 00:08:37.533 "data_size": 0 00:08:37.533 }, 00:08:37.533 { 00:08:37.533 "name": "BaseBdev3", 00:08:37.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.533 "is_configured": false, 00:08:37.533 "data_offset": 0, 00:08:37.533 "data_size": 0 00:08:37.533 } 00:08:37.533 ] 00:08:37.533 }' 00:08:37.533 08:45:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.533 08:45:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.793 08:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:37.793 08:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.793 08:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.793 [2024-10-05 08:45:14.225144] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:37.793 [2024-10-05 08:45:14.225218] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:37.793 08:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.793 08:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:37.793 08:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.793 08:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.793 [2024-10-05 08:45:14.237140] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:37.793 [2024-10-05 
08:45:14.239291] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:37.793 [2024-10-05 08:45:14.239334] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:37.793 [2024-10-05 08:45:14.239345] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:37.793 [2024-10-05 08:45:14.239354] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:37.793 08:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.793 08:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:37.793 08:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:37.793 08:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:37.793 08:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.793 08:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.793 08:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.793 08:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.794 08:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.794 08:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.794 08:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.794 08:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.794 08:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:37.794 08:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.794 08:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.794 08:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.794 08:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.054 08:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.054 08:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.054 "name": "Existed_Raid", 00:08:38.054 "uuid": "9899e8ee-85ce-42d1-88c8-e65523e1ab66", 00:08:38.054 "strip_size_kb": 64, 00:08:38.054 "state": "configuring", 00:08:38.054 "raid_level": "raid0", 00:08:38.054 "superblock": true, 00:08:38.054 "num_base_bdevs": 3, 00:08:38.054 "num_base_bdevs_discovered": 1, 00:08:38.054 "num_base_bdevs_operational": 3, 00:08:38.054 "base_bdevs_list": [ 00:08:38.054 { 00:08:38.054 "name": "BaseBdev1", 00:08:38.054 "uuid": "1d342f21-74bb-4afe-afb2-a1bdbcf8384a", 00:08:38.054 "is_configured": true, 00:08:38.054 "data_offset": 2048, 00:08:38.054 "data_size": 63488 00:08:38.054 }, 00:08:38.054 { 00:08:38.054 "name": "BaseBdev2", 00:08:38.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.054 "is_configured": false, 00:08:38.054 "data_offset": 0, 00:08:38.054 "data_size": 0 00:08:38.054 }, 00:08:38.054 { 00:08:38.054 "name": "BaseBdev3", 00:08:38.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.054 "is_configured": false, 00:08:38.054 "data_offset": 0, 00:08:38.054 "data_size": 0 00:08:38.054 } 00:08:38.054 ] 00:08:38.054 }' 00:08:38.054 08:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.054 08:45:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:38.314 08:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:38.314 08:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.314 08:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.314 [2024-10-05 08:45:14.764032] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:38.314 BaseBdev2 00:08:38.314 08:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.314 08:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:38.314 08:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:38.314 08:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:38.314 08:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:38.314 08:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:38.314 08:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:38.314 08:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:38.314 08:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.314 08:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.314 08:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.314 08:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:38.314 08:45:14 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.314 08:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.573 [ 00:08:38.573 { 00:08:38.573 "name": "BaseBdev2", 00:08:38.573 "aliases": [ 00:08:38.573 "ffaaa83f-58b5-42e2-83e1-ae3b11e15b76" 00:08:38.573 ], 00:08:38.573 "product_name": "Malloc disk", 00:08:38.573 "block_size": 512, 00:08:38.573 "num_blocks": 65536, 00:08:38.573 "uuid": "ffaaa83f-58b5-42e2-83e1-ae3b11e15b76", 00:08:38.573 "assigned_rate_limits": { 00:08:38.573 "rw_ios_per_sec": 0, 00:08:38.573 "rw_mbytes_per_sec": 0, 00:08:38.573 "r_mbytes_per_sec": 0, 00:08:38.573 "w_mbytes_per_sec": 0 00:08:38.573 }, 00:08:38.573 "claimed": true, 00:08:38.573 "claim_type": "exclusive_write", 00:08:38.573 "zoned": false, 00:08:38.573 "supported_io_types": { 00:08:38.573 "read": true, 00:08:38.573 "write": true, 00:08:38.573 "unmap": true, 00:08:38.573 "flush": true, 00:08:38.573 "reset": true, 00:08:38.573 "nvme_admin": false, 00:08:38.573 "nvme_io": false, 00:08:38.573 "nvme_io_md": false, 00:08:38.573 "write_zeroes": true, 00:08:38.573 "zcopy": true, 00:08:38.573 "get_zone_info": false, 00:08:38.573 "zone_management": false, 00:08:38.573 "zone_append": false, 00:08:38.573 "compare": false, 00:08:38.573 "compare_and_write": false, 00:08:38.573 "abort": true, 00:08:38.573 "seek_hole": false, 00:08:38.573 "seek_data": false, 00:08:38.573 "copy": true, 00:08:38.573 "nvme_iov_md": false 00:08:38.573 }, 00:08:38.573 "memory_domains": [ 00:08:38.573 { 00:08:38.573 "dma_device_id": "system", 00:08:38.573 "dma_device_type": 1 00:08:38.573 }, 00:08:38.573 { 00:08:38.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.573 "dma_device_type": 2 00:08:38.573 } 00:08:38.573 ], 00:08:38.574 "driver_specific": {} 00:08:38.574 } 00:08:38.574 ] 00:08:38.574 08:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.574 08:45:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:08:38.574 08:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:38.574 08:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:38.574 08:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:38.574 08:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.574 08:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.574 08:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:38.574 08:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.574 08:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.574 08:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.574 08:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.574 08:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.574 08:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.574 08:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.574 08:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.574 08:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.574 08:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.574 08:45:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.574 08:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.574 "name": "Existed_Raid", 00:08:38.574 "uuid": "9899e8ee-85ce-42d1-88c8-e65523e1ab66", 00:08:38.574 "strip_size_kb": 64, 00:08:38.574 "state": "configuring", 00:08:38.574 "raid_level": "raid0", 00:08:38.574 "superblock": true, 00:08:38.574 "num_base_bdevs": 3, 00:08:38.574 "num_base_bdevs_discovered": 2, 00:08:38.574 "num_base_bdevs_operational": 3, 00:08:38.574 "base_bdevs_list": [ 00:08:38.574 { 00:08:38.574 "name": "BaseBdev1", 00:08:38.574 "uuid": "1d342f21-74bb-4afe-afb2-a1bdbcf8384a", 00:08:38.574 "is_configured": true, 00:08:38.574 "data_offset": 2048, 00:08:38.574 "data_size": 63488 00:08:38.574 }, 00:08:38.574 { 00:08:38.574 "name": "BaseBdev2", 00:08:38.574 "uuid": "ffaaa83f-58b5-42e2-83e1-ae3b11e15b76", 00:08:38.574 "is_configured": true, 00:08:38.574 "data_offset": 2048, 00:08:38.574 "data_size": 63488 00:08:38.574 }, 00:08:38.574 { 00:08:38.574 "name": "BaseBdev3", 00:08:38.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.574 "is_configured": false, 00:08:38.574 "data_offset": 0, 00:08:38.574 "data_size": 0 00:08:38.574 } 00:08:38.574 ] 00:08:38.574 }' 00:08:38.574 08:45:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.574 08:45:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.833 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:38.833 08:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.833 08:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.096 [2024-10-05 08:45:15.306417] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:39.096 [2024-10-05 08:45:15.306686] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:39.096 [2024-10-05 08:45:15.306714] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:39.096 [2024-10-05 08:45:15.307004] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:39.096 [2024-10-05 08:45:15.307166] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:39.096 [2024-10-05 08:45:15.307175] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:39.096 BaseBdev3 00:08:39.096 [2024-10-05 08:45:15.307327] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:39.096 08:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.096 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:39.096 08:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:39.096 08:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:39.096 08:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:39.096 08:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:39.096 08:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:39.096 08:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:39.096 08:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.096 08:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.096 08:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:08:39.096 08:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:39.096 08:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.096 08:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.096 [ 00:08:39.096 { 00:08:39.096 "name": "BaseBdev3", 00:08:39.096 "aliases": [ 00:08:39.096 "42b2c4af-a13f-44bc-9bd3-c9ec4c2f04c9" 00:08:39.096 ], 00:08:39.096 "product_name": "Malloc disk", 00:08:39.096 "block_size": 512, 00:08:39.096 "num_blocks": 65536, 00:08:39.096 "uuid": "42b2c4af-a13f-44bc-9bd3-c9ec4c2f04c9", 00:08:39.096 "assigned_rate_limits": { 00:08:39.096 "rw_ios_per_sec": 0, 00:08:39.096 "rw_mbytes_per_sec": 0, 00:08:39.096 "r_mbytes_per_sec": 0, 00:08:39.096 "w_mbytes_per_sec": 0 00:08:39.096 }, 00:08:39.096 "claimed": true, 00:08:39.096 "claim_type": "exclusive_write", 00:08:39.096 "zoned": false, 00:08:39.096 "supported_io_types": { 00:08:39.096 "read": true, 00:08:39.096 "write": true, 00:08:39.096 "unmap": true, 00:08:39.096 "flush": true, 00:08:39.096 "reset": true, 00:08:39.096 "nvme_admin": false, 00:08:39.096 "nvme_io": false, 00:08:39.096 "nvme_io_md": false, 00:08:39.096 "write_zeroes": true, 00:08:39.096 "zcopy": true, 00:08:39.096 "get_zone_info": false, 00:08:39.096 "zone_management": false, 00:08:39.096 "zone_append": false, 00:08:39.096 "compare": false, 00:08:39.096 "compare_and_write": false, 00:08:39.096 "abort": true, 00:08:39.096 "seek_hole": false, 00:08:39.096 "seek_data": false, 00:08:39.096 "copy": true, 00:08:39.096 "nvme_iov_md": false 00:08:39.096 }, 00:08:39.096 "memory_domains": [ 00:08:39.096 { 00:08:39.096 "dma_device_id": "system", 00:08:39.096 "dma_device_type": 1 00:08:39.096 }, 00:08:39.096 { 00:08:39.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.096 "dma_device_type": 2 00:08:39.096 } 00:08:39.096 ], 00:08:39.096 "driver_specific": 
{} 00:08:39.096 } 00:08:39.096 ] 00:08:39.096 08:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.096 08:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:39.096 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:39.096 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:39.096 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:39.096 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.096 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:39.096 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:39.096 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.096 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.096 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.096 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.096 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.096 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.096 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.096 08:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.096 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:39.096 08:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.096 08:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.096 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.096 "name": "Existed_Raid", 00:08:39.096 "uuid": "9899e8ee-85ce-42d1-88c8-e65523e1ab66", 00:08:39.096 "strip_size_kb": 64, 00:08:39.096 "state": "online", 00:08:39.096 "raid_level": "raid0", 00:08:39.096 "superblock": true, 00:08:39.096 "num_base_bdevs": 3, 00:08:39.096 "num_base_bdevs_discovered": 3, 00:08:39.096 "num_base_bdevs_operational": 3, 00:08:39.096 "base_bdevs_list": [ 00:08:39.096 { 00:08:39.096 "name": "BaseBdev1", 00:08:39.096 "uuid": "1d342f21-74bb-4afe-afb2-a1bdbcf8384a", 00:08:39.096 "is_configured": true, 00:08:39.096 "data_offset": 2048, 00:08:39.096 "data_size": 63488 00:08:39.096 }, 00:08:39.096 { 00:08:39.096 "name": "BaseBdev2", 00:08:39.096 "uuid": "ffaaa83f-58b5-42e2-83e1-ae3b11e15b76", 00:08:39.096 "is_configured": true, 00:08:39.096 "data_offset": 2048, 00:08:39.096 "data_size": 63488 00:08:39.096 }, 00:08:39.096 { 00:08:39.096 "name": "BaseBdev3", 00:08:39.096 "uuid": "42b2c4af-a13f-44bc-9bd3-c9ec4c2f04c9", 00:08:39.096 "is_configured": true, 00:08:39.096 "data_offset": 2048, 00:08:39.096 "data_size": 63488 00:08:39.096 } 00:08:39.096 ] 00:08:39.096 }' 00:08:39.096 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.096 08:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.365 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:39.365 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:39.365 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:08:39.365 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:39.365 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:39.365 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:39.365 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:39.365 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:39.365 08:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.365 08:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.365 [2024-10-05 08:45:15.734048] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:39.366 08:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.366 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:39.366 "name": "Existed_Raid", 00:08:39.366 "aliases": [ 00:08:39.366 "9899e8ee-85ce-42d1-88c8-e65523e1ab66" 00:08:39.366 ], 00:08:39.366 "product_name": "Raid Volume", 00:08:39.366 "block_size": 512, 00:08:39.366 "num_blocks": 190464, 00:08:39.366 "uuid": "9899e8ee-85ce-42d1-88c8-e65523e1ab66", 00:08:39.366 "assigned_rate_limits": { 00:08:39.366 "rw_ios_per_sec": 0, 00:08:39.366 "rw_mbytes_per_sec": 0, 00:08:39.366 "r_mbytes_per_sec": 0, 00:08:39.366 "w_mbytes_per_sec": 0 00:08:39.366 }, 00:08:39.366 "claimed": false, 00:08:39.366 "zoned": false, 00:08:39.366 "supported_io_types": { 00:08:39.366 "read": true, 00:08:39.366 "write": true, 00:08:39.366 "unmap": true, 00:08:39.366 "flush": true, 00:08:39.366 "reset": true, 00:08:39.366 "nvme_admin": false, 00:08:39.366 "nvme_io": false, 00:08:39.366 "nvme_io_md": false, 00:08:39.366 
"write_zeroes": true, 00:08:39.366 "zcopy": false, 00:08:39.366 "get_zone_info": false, 00:08:39.366 "zone_management": false, 00:08:39.366 "zone_append": false, 00:08:39.366 "compare": false, 00:08:39.366 "compare_and_write": false, 00:08:39.366 "abort": false, 00:08:39.366 "seek_hole": false, 00:08:39.366 "seek_data": false, 00:08:39.366 "copy": false, 00:08:39.366 "nvme_iov_md": false 00:08:39.366 }, 00:08:39.366 "memory_domains": [ 00:08:39.366 { 00:08:39.366 "dma_device_id": "system", 00:08:39.366 "dma_device_type": 1 00:08:39.366 }, 00:08:39.366 { 00:08:39.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.366 "dma_device_type": 2 00:08:39.366 }, 00:08:39.366 { 00:08:39.366 "dma_device_id": "system", 00:08:39.366 "dma_device_type": 1 00:08:39.366 }, 00:08:39.366 { 00:08:39.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.366 "dma_device_type": 2 00:08:39.366 }, 00:08:39.366 { 00:08:39.366 "dma_device_id": "system", 00:08:39.366 "dma_device_type": 1 00:08:39.366 }, 00:08:39.366 { 00:08:39.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.366 "dma_device_type": 2 00:08:39.366 } 00:08:39.366 ], 00:08:39.366 "driver_specific": { 00:08:39.366 "raid": { 00:08:39.366 "uuid": "9899e8ee-85ce-42d1-88c8-e65523e1ab66", 00:08:39.366 "strip_size_kb": 64, 00:08:39.366 "state": "online", 00:08:39.366 "raid_level": "raid0", 00:08:39.366 "superblock": true, 00:08:39.366 "num_base_bdevs": 3, 00:08:39.366 "num_base_bdevs_discovered": 3, 00:08:39.366 "num_base_bdevs_operational": 3, 00:08:39.366 "base_bdevs_list": [ 00:08:39.366 { 00:08:39.366 "name": "BaseBdev1", 00:08:39.366 "uuid": "1d342f21-74bb-4afe-afb2-a1bdbcf8384a", 00:08:39.366 "is_configured": true, 00:08:39.366 "data_offset": 2048, 00:08:39.366 "data_size": 63488 00:08:39.366 }, 00:08:39.366 { 00:08:39.366 "name": "BaseBdev2", 00:08:39.366 "uuid": "ffaaa83f-58b5-42e2-83e1-ae3b11e15b76", 00:08:39.366 "is_configured": true, 00:08:39.366 "data_offset": 2048, 00:08:39.366 "data_size": 63488 00:08:39.366 }, 
00:08:39.366 { 00:08:39.366 "name": "BaseBdev3", 00:08:39.366 "uuid": "42b2c4af-a13f-44bc-9bd3-c9ec4c2f04c9", 00:08:39.366 "is_configured": true, 00:08:39.366 "data_offset": 2048, 00:08:39.366 "data_size": 63488 00:08:39.366 } 00:08:39.366 ] 00:08:39.366 } 00:08:39.366 } 00:08:39.366 }' 00:08:39.366 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:39.366 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:39.366 BaseBdev2 00:08:39.366 BaseBdev3' 00:08:39.366 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.626 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:39.626 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:39.626 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.626 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:39.626 08:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.626 08:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.626 08:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.626 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:39.626 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:39.626 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:39.626 
08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:39.626 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.626 08:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.626 08:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.626 08:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.626 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:39.626 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:39.626 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:39.626 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.626 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:39.626 08:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.626 08:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.626 08:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.626 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:39.626 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:39.626 08:45:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:39.626 08:45:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.626 08:45:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.626 [2024-10-05 08:45:15.981300] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:39.626 [2024-10-05 08:45:15.981326] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:39.626 [2024-10-05 08:45:15.981382] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:39.626 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.626 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:39.626 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:39.626 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:39.626 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:39.626 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:39.626 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:39.626 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.626 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:39.626 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:39.626 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.626 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:39.626 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:39.626 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.626 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.627 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.627 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.627 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.627 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.627 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.886 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.886 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.886 "name": "Existed_Raid", 00:08:39.886 "uuid": "9899e8ee-85ce-42d1-88c8-e65523e1ab66", 00:08:39.886 "strip_size_kb": 64, 00:08:39.886 "state": "offline", 00:08:39.886 "raid_level": "raid0", 00:08:39.886 "superblock": true, 00:08:39.886 "num_base_bdevs": 3, 00:08:39.886 "num_base_bdevs_discovered": 2, 00:08:39.886 "num_base_bdevs_operational": 2, 00:08:39.886 "base_bdevs_list": [ 00:08:39.886 { 00:08:39.886 "name": null, 00:08:39.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.886 "is_configured": false, 00:08:39.886 "data_offset": 0, 00:08:39.886 "data_size": 63488 00:08:39.886 }, 00:08:39.886 { 00:08:39.886 "name": "BaseBdev2", 00:08:39.886 "uuid": "ffaaa83f-58b5-42e2-83e1-ae3b11e15b76", 00:08:39.886 "is_configured": true, 00:08:39.886 "data_offset": 2048, 00:08:39.886 "data_size": 63488 00:08:39.886 }, 00:08:39.886 { 00:08:39.886 "name": "BaseBdev3", 00:08:39.886 "uuid": "42b2c4af-a13f-44bc-9bd3-c9ec4c2f04c9", 
00:08:39.886 "is_configured": true, 00:08:39.886 "data_offset": 2048, 00:08:39.886 "data_size": 63488 00:08:39.886 } 00:08:39.886 ] 00:08:39.886 }' 00:08:39.886 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.886 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.146 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:40.146 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:40.146 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.146 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:40.146 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.146 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.146 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.146 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:40.146 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:40.146 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:40.146 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.146 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.146 [2024-10-05 08:45:16.535410] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:40.406 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.406 08:45:16 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:40.406 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:40.406 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.406 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:40.406 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.406 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.406 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.406 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:40.406 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:40.406 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:40.406 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.406 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.406 [2024-10-05 08:45:16.699721] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:40.406 [2024-10-05 08:45:16.699846] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:40.406 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.406 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:40.406 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:40.406 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] 
| select(.)' 00:08:40.406 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.406 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.406 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.406 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.406 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:40.406 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:40.406 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:40.406 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:40.406 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:40.406 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:40.406 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.406 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.666 BaseBdev2 00:08:40.666 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.666 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:40.666 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:40.666 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:40.666 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:40.666 08:45:16 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:40.666 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:40.666 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:40.666 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.666 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.666 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.666 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:40.666 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.666 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.666 [ 00:08:40.666 { 00:08:40.666 "name": "BaseBdev2", 00:08:40.666 "aliases": [ 00:08:40.666 "350323dc-18a0-46de-a26b-6095ffffd61e" 00:08:40.666 ], 00:08:40.666 "product_name": "Malloc disk", 00:08:40.666 "block_size": 512, 00:08:40.666 "num_blocks": 65536, 00:08:40.666 "uuid": "350323dc-18a0-46de-a26b-6095ffffd61e", 00:08:40.666 "assigned_rate_limits": { 00:08:40.666 "rw_ios_per_sec": 0, 00:08:40.666 "rw_mbytes_per_sec": 0, 00:08:40.666 "r_mbytes_per_sec": 0, 00:08:40.666 "w_mbytes_per_sec": 0 00:08:40.666 }, 00:08:40.666 "claimed": false, 00:08:40.666 "zoned": false, 00:08:40.666 "supported_io_types": { 00:08:40.666 "read": true, 00:08:40.666 "write": true, 00:08:40.666 "unmap": true, 00:08:40.666 "flush": true, 00:08:40.666 "reset": true, 00:08:40.666 "nvme_admin": false, 00:08:40.666 "nvme_io": false, 00:08:40.666 "nvme_io_md": false, 00:08:40.666 "write_zeroes": true, 00:08:40.666 "zcopy": true, 00:08:40.666 "get_zone_info": false, 00:08:40.666 "zone_management": false, 00:08:40.666 
"zone_append": false, 00:08:40.666 "compare": false, 00:08:40.666 "compare_and_write": false, 00:08:40.666 "abort": true, 00:08:40.666 "seek_hole": false, 00:08:40.667 "seek_data": false, 00:08:40.667 "copy": true, 00:08:40.667 "nvme_iov_md": false 00:08:40.667 }, 00:08:40.667 "memory_domains": [ 00:08:40.667 { 00:08:40.667 "dma_device_id": "system", 00:08:40.667 "dma_device_type": 1 00:08:40.667 }, 00:08:40.667 { 00:08:40.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.667 "dma_device_type": 2 00:08:40.667 } 00:08:40.667 ], 00:08:40.667 "driver_specific": {} 00:08:40.667 } 00:08:40.667 ] 00:08:40.667 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.667 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:40.667 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:40.667 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:40.667 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:40.667 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.667 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.667 BaseBdev3 00:08:40.667 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.667 08:45:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:40.667 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:40.667 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:40.667 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:40.667 
08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:40.667 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:40.667 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:40.667 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.667 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.667 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.667 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:40.667 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.667 08:45:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.667 [ 00:08:40.667 { 00:08:40.667 "name": "BaseBdev3", 00:08:40.667 "aliases": [ 00:08:40.667 "fdd1c8be-72b1-4fab-93c4-705a256dfbe1" 00:08:40.667 ], 00:08:40.667 "product_name": "Malloc disk", 00:08:40.667 "block_size": 512, 00:08:40.667 "num_blocks": 65536, 00:08:40.667 "uuid": "fdd1c8be-72b1-4fab-93c4-705a256dfbe1", 00:08:40.667 "assigned_rate_limits": { 00:08:40.667 "rw_ios_per_sec": 0, 00:08:40.667 "rw_mbytes_per_sec": 0, 00:08:40.667 "r_mbytes_per_sec": 0, 00:08:40.667 "w_mbytes_per_sec": 0 00:08:40.667 }, 00:08:40.667 "claimed": false, 00:08:40.667 "zoned": false, 00:08:40.667 "supported_io_types": { 00:08:40.667 "read": true, 00:08:40.667 "write": true, 00:08:40.667 "unmap": true, 00:08:40.667 "flush": true, 00:08:40.667 "reset": true, 00:08:40.667 "nvme_admin": false, 00:08:40.667 "nvme_io": false, 00:08:40.667 "nvme_io_md": false, 00:08:40.667 "write_zeroes": true, 00:08:40.667 "zcopy": true, 00:08:40.667 "get_zone_info": false, 
00:08:40.667 "zone_management": false, 00:08:40.667 "zone_append": false, 00:08:40.667 "compare": false, 00:08:40.667 "compare_and_write": false, 00:08:40.667 "abort": true, 00:08:40.667 "seek_hole": false, 00:08:40.667 "seek_data": false, 00:08:40.667 "copy": true, 00:08:40.667 "nvme_iov_md": false 00:08:40.667 }, 00:08:40.667 "memory_domains": [ 00:08:40.667 { 00:08:40.667 "dma_device_id": "system", 00:08:40.667 "dma_device_type": 1 00:08:40.667 }, 00:08:40.667 { 00:08:40.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.667 "dma_device_type": 2 00:08:40.667 } 00:08:40.667 ], 00:08:40.667 "driver_specific": {} 00:08:40.667 } 00:08:40.667 ] 00:08:40.667 08:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.667 08:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:40.667 08:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:40.667 08:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:40.667 08:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:40.667 08:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.667 08:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.667 [2024-10-05 08:45:17.011281] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:40.667 [2024-10-05 08:45:17.011368] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:40.667 [2024-10-05 08:45:17.011409] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:40.667 [2024-10-05 08:45:17.013430] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:08:40.667 08:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.667 08:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:40.667 08:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.667 08:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.667 08:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:40.667 08:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.667 08:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.667 08:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.667 08:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.667 08:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.667 08:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.667 08:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.667 08:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.667 08:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.667 08:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.668 08:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.668 08:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:08:40.668 "name": "Existed_Raid", 00:08:40.668 "uuid": "8e3dde1a-1226-4121-aa7e-b64e4bf0a8dd", 00:08:40.668 "strip_size_kb": 64, 00:08:40.668 "state": "configuring", 00:08:40.668 "raid_level": "raid0", 00:08:40.668 "superblock": true, 00:08:40.668 "num_base_bdevs": 3, 00:08:40.668 "num_base_bdevs_discovered": 2, 00:08:40.668 "num_base_bdevs_operational": 3, 00:08:40.668 "base_bdevs_list": [ 00:08:40.668 { 00:08:40.668 "name": "BaseBdev1", 00:08:40.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.668 "is_configured": false, 00:08:40.668 "data_offset": 0, 00:08:40.668 "data_size": 0 00:08:40.668 }, 00:08:40.668 { 00:08:40.668 "name": "BaseBdev2", 00:08:40.668 "uuid": "350323dc-18a0-46de-a26b-6095ffffd61e", 00:08:40.668 "is_configured": true, 00:08:40.668 "data_offset": 2048, 00:08:40.668 "data_size": 63488 00:08:40.668 }, 00:08:40.668 { 00:08:40.668 "name": "BaseBdev3", 00:08:40.668 "uuid": "fdd1c8be-72b1-4fab-93c4-705a256dfbe1", 00:08:40.668 "is_configured": true, 00:08:40.668 "data_offset": 2048, 00:08:40.668 "data_size": 63488 00:08:40.668 } 00:08:40.668 ] 00:08:40.668 }' 00:08:40.668 08:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.668 08:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.236 08:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:41.236 08:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.236 08:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.236 [2024-10-05 08:45:17.474470] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:41.236 08:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.236 08:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:41.236 08:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.236 08:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.236 08:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:41.236 08:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.236 08:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.236 08:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.236 08:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.236 08:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.236 08:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.236 08:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.236 08:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.236 08:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.236 08:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.236 08:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.236 08:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.236 "name": "Existed_Raid", 00:08:41.236 "uuid": "8e3dde1a-1226-4121-aa7e-b64e4bf0a8dd", 00:08:41.236 "strip_size_kb": 64, 00:08:41.236 "state": "configuring", 00:08:41.236 "raid_level": "raid0", 
00:08:41.236 "superblock": true, 00:08:41.236 "num_base_bdevs": 3, 00:08:41.236 "num_base_bdevs_discovered": 1, 00:08:41.236 "num_base_bdevs_operational": 3, 00:08:41.236 "base_bdevs_list": [ 00:08:41.236 { 00:08:41.236 "name": "BaseBdev1", 00:08:41.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.236 "is_configured": false, 00:08:41.236 "data_offset": 0, 00:08:41.236 "data_size": 0 00:08:41.236 }, 00:08:41.236 { 00:08:41.236 "name": null, 00:08:41.236 "uuid": "350323dc-18a0-46de-a26b-6095ffffd61e", 00:08:41.236 "is_configured": false, 00:08:41.236 "data_offset": 0, 00:08:41.236 "data_size": 63488 00:08:41.236 }, 00:08:41.236 { 00:08:41.236 "name": "BaseBdev3", 00:08:41.236 "uuid": "fdd1c8be-72b1-4fab-93c4-705a256dfbe1", 00:08:41.236 "is_configured": true, 00:08:41.236 "data_offset": 2048, 00:08:41.236 "data_size": 63488 00:08:41.236 } 00:08:41.236 ] 00:08:41.236 }' 00:08:41.236 08:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.236 08:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.494 08:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:41.494 08:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.494 08:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.494 08:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.494 08:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.494 08:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:41.494 08:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:41.494 08:45:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.494 08:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.752 [2024-10-05 08:45:17.984896] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:41.752 BaseBdev1 00:08:41.752 08:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.752 08:45:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:41.752 08:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:41.752 08:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:41.752 08:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:41.752 08:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:41.752 08:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:41.752 08:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:41.752 08:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.752 08:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.752 08:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.752 08:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:41.752 08:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.752 08:45:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.752 [ 00:08:41.752 { 00:08:41.752 "name": "BaseBdev1", 00:08:41.752 
"aliases": [ 00:08:41.752 "24ce9e7f-fea8-43ca-b16e-8d6535a411d5" 00:08:41.752 ], 00:08:41.752 "product_name": "Malloc disk", 00:08:41.752 "block_size": 512, 00:08:41.752 "num_blocks": 65536, 00:08:41.752 "uuid": "24ce9e7f-fea8-43ca-b16e-8d6535a411d5", 00:08:41.752 "assigned_rate_limits": { 00:08:41.752 "rw_ios_per_sec": 0, 00:08:41.752 "rw_mbytes_per_sec": 0, 00:08:41.752 "r_mbytes_per_sec": 0, 00:08:41.752 "w_mbytes_per_sec": 0 00:08:41.752 }, 00:08:41.752 "claimed": true, 00:08:41.752 "claim_type": "exclusive_write", 00:08:41.752 "zoned": false, 00:08:41.752 "supported_io_types": { 00:08:41.752 "read": true, 00:08:41.752 "write": true, 00:08:41.752 "unmap": true, 00:08:41.752 "flush": true, 00:08:41.752 "reset": true, 00:08:41.752 "nvme_admin": false, 00:08:41.752 "nvme_io": false, 00:08:41.752 "nvme_io_md": false, 00:08:41.752 "write_zeroes": true, 00:08:41.752 "zcopy": true, 00:08:41.752 "get_zone_info": false, 00:08:41.752 "zone_management": false, 00:08:41.752 "zone_append": false, 00:08:41.752 "compare": false, 00:08:41.752 "compare_and_write": false, 00:08:41.752 "abort": true, 00:08:41.752 "seek_hole": false, 00:08:41.752 "seek_data": false, 00:08:41.752 "copy": true, 00:08:41.752 "nvme_iov_md": false 00:08:41.752 }, 00:08:41.752 "memory_domains": [ 00:08:41.752 { 00:08:41.752 "dma_device_id": "system", 00:08:41.752 "dma_device_type": 1 00:08:41.752 }, 00:08:41.752 { 00:08:41.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.752 "dma_device_type": 2 00:08:41.752 } 00:08:41.752 ], 00:08:41.752 "driver_specific": {} 00:08:41.752 } 00:08:41.752 ] 00:08:41.752 08:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.752 08:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:41.752 08:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:41.752 08:45:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.752 08:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.752 08:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:41.752 08:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.752 08:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.752 08:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.752 08:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.752 08:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.752 08:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.752 08:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.752 08:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.752 08:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.752 08:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.753 08:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.753 08:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.753 "name": "Existed_Raid", 00:08:41.753 "uuid": "8e3dde1a-1226-4121-aa7e-b64e4bf0a8dd", 00:08:41.753 "strip_size_kb": 64, 00:08:41.753 "state": "configuring", 00:08:41.753 "raid_level": "raid0", 00:08:41.753 "superblock": true, 00:08:41.753 "num_base_bdevs": 3, 00:08:41.753 
"num_base_bdevs_discovered": 2, 00:08:41.753 "num_base_bdevs_operational": 3, 00:08:41.753 "base_bdevs_list": [ 00:08:41.753 { 00:08:41.753 "name": "BaseBdev1", 00:08:41.753 "uuid": "24ce9e7f-fea8-43ca-b16e-8d6535a411d5", 00:08:41.753 "is_configured": true, 00:08:41.753 "data_offset": 2048, 00:08:41.753 "data_size": 63488 00:08:41.753 }, 00:08:41.753 { 00:08:41.753 "name": null, 00:08:41.753 "uuid": "350323dc-18a0-46de-a26b-6095ffffd61e", 00:08:41.753 "is_configured": false, 00:08:41.753 "data_offset": 0, 00:08:41.753 "data_size": 63488 00:08:41.753 }, 00:08:41.753 { 00:08:41.753 "name": "BaseBdev3", 00:08:41.753 "uuid": "fdd1c8be-72b1-4fab-93c4-705a256dfbe1", 00:08:41.753 "is_configured": true, 00:08:41.753 "data_offset": 2048, 00:08:41.753 "data_size": 63488 00:08:41.753 } 00:08:41.753 ] 00:08:41.753 }' 00:08:41.753 08:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.753 08:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.010 08:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:42.010 08:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.010 08:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.010 08:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.010 08:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.269 08:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:42.269 08:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:42.269 08:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.269 08:45:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.269 [2024-10-05 08:45:18.508060] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:42.269 08:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.269 08:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:42.269 08:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.269 08:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.269 08:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:42.269 08:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.269 08:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.269 08:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.269 08:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.269 08:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.269 08:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.269 08:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.269 08:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.269 08:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.269 08:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.269 08:45:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.269 08:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.269 "name": "Existed_Raid", 00:08:42.269 "uuid": "8e3dde1a-1226-4121-aa7e-b64e4bf0a8dd", 00:08:42.269 "strip_size_kb": 64, 00:08:42.269 "state": "configuring", 00:08:42.269 "raid_level": "raid0", 00:08:42.269 "superblock": true, 00:08:42.269 "num_base_bdevs": 3, 00:08:42.269 "num_base_bdevs_discovered": 1, 00:08:42.269 "num_base_bdevs_operational": 3, 00:08:42.269 "base_bdevs_list": [ 00:08:42.269 { 00:08:42.269 "name": "BaseBdev1", 00:08:42.269 "uuid": "24ce9e7f-fea8-43ca-b16e-8d6535a411d5", 00:08:42.269 "is_configured": true, 00:08:42.269 "data_offset": 2048, 00:08:42.269 "data_size": 63488 00:08:42.269 }, 00:08:42.269 { 00:08:42.269 "name": null, 00:08:42.269 "uuid": "350323dc-18a0-46de-a26b-6095ffffd61e", 00:08:42.269 "is_configured": false, 00:08:42.269 "data_offset": 0, 00:08:42.269 "data_size": 63488 00:08:42.269 }, 00:08:42.269 { 00:08:42.269 "name": null, 00:08:42.269 "uuid": "fdd1c8be-72b1-4fab-93c4-705a256dfbe1", 00:08:42.269 "is_configured": false, 00:08:42.269 "data_offset": 0, 00:08:42.269 "data_size": 63488 00:08:42.269 } 00:08:42.269 ] 00:08:42.269 }' 00:08:42.269 08:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.269 08:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.526 08:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.526 08:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:42.526 08:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.526 08:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.526 08:45:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.526 08:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:42.526 08:45:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:42.526 08:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.526 08:45:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.783 [2024-10-05 08:45:18.999236] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:42.783 08:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.783 08:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:42.783 08:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.783 08:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.783 08:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:42.783 08:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.783 08:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.783 08:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.783 08:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.783 08:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.783 08:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:08:42.783 08:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.783 08:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.783 08:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.783 08:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.783 08:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.783 08:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.783 "name": "Existed_Raid", 00:08:42.783 "uuid": "8e3dde1a-1226-4121-aa7e-b64e4bf0a8dd", 00:08:42.783 "strip_size_kb": 64, 00:08:42.783 "state": "configuring", 00:08:42.783 "raid_level": "raid0", 00:08:42.783 "superblock": true, 00:08:42.783 "num_base_bdevs": 3, 00:08:42.783 "num_base_bdevs_discovered": 2, 00:08:42.783 "num_base_bdevs_operational": 3, 00:08:42.783 "base_bdevs_list": [ 00:08:42.783 { 00:08:42.783 "name": "BaseBdev1", 00:08:42.783 "uuid": "24ce9e7f-fea8-43ca-b16e-8d6535a411d5", 00:08:42.783 "is_configured": true, 00:08:42.783 "data_offset": 2048, 00:08:42.783 "data_size": 63488 00:08:42.783 }, 00:08:42.783 { 00:08:42.783 "name": null, 00:08:42.783 "uuid": "350323dc-18a0-46de-a26b-6095ffffd61e", 00:08:42.783 "is_configured": false, 00:08:42.783 "data_offset": 0, 00:08:42.783 "data_size": 63488 00:08:42.783 }, 00:08:42.783 { 00:08:42.783 "name": "BaseBdev3", 00:08:42.783 "uuid": "fdd1c8be-72b1-4fab-93c4-705a256dfbe1", 00:08:42.783 "is_configured": true, 00:08:42.783 "data_offset": 2048, 00:08:42.783 "data_size": 63488 00:08:42.783 } 00:08:42.783 ] 00:08:42.783 }' 00:08:42.783 08:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.783 08:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:08:43.042 08:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:43.042 08:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.042 08:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.042 08:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.042 08:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.042 08:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:43.042 08:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:43.042 08:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.042 08:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.042 [2024-10-05 08:45:19.454507] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:43.301 08:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.301 08:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:43.301 08:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.301 08:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.301 08:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:43.301 08:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.301 08:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:08:43.301 08:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.301 08:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.301 08:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.301 08:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.301 08:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.301 08:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.301 08:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.301 08:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.301 08:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.301 08:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.301 "name": "Existed_Raid", 00:08:43.301 "uuid": "8e3dde1a-1226-4121-aa7e-b64e4bf0a8dd", 00:08:43.301 "strip_size_kb": 64, 00:08:43.301 "state": "configuring", 00:08:43.301 "raid_level": "raid0", 00:08:43.301 "superblock": true, 00:08:43.301 "num_base_bdevs": 3, 00:08:43.301 "num_base_bdevs_discovered": 1, 00:08:43.301 "num_base_bdevs_operational": 3, 00:08:43.301 "base_bdevs_list": [ 00:08:43.301 { 00:08:43.301 "name": null, 00:08:43.301 "uuid": "24ce9e7f-fea8-43ca-b16e-8d6535a411d5", 00:08:43.301 "is_configured": false, 00:08:43.301 "data_offset": 0, 00:08:43.301 "data_size": 63488 00:08:43.301 }, 00:08:43.301 { 00:08:43.301 "name": null, 00:08:43.301 "uuid": "350323dc-18a0-46de-a26b-6095ffffd61e", 00:08:43.301 "is_configured": false, 00:08:43.301 "data_offset": 0, 00:08:43.301 "data_size": 63488 00:08:43.301 
}, 00:08:43.301 { 00:08:43.301 "name": "BaseBdev3", 00:08:43.301 "uuid": "fdd1c8be-72b1-4fab-93c4-705a256dfbe1", 00:08:43.301 "is_configured": true, 00:08:43.301 "data_offset": 2048, 00:08:43.301 "data_size": 63488 00:08:43.301 } 00:08:43.301 ] 00:08:43.301 }' 00:08:43.301 08:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.301 08:45:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.559 08:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:43.559 08:45:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.559 08:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.559 08:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.559 08:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.559 08:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:43.559 08:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:43.559 08:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.559 08:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.560 [2024-10-05 08:45:20.027431] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:43.819 08:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.819 08:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:43.819 08:45:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.819 08:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.819 08:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:43.819 08:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.819 08:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.819 08:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.819 08:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.819 08:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.819 08:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.819 08:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.819 08:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.819 08:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.819 08:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.819 08:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.819 08:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.819 "name": "Existed_Raid", 00:08:43.819 "uuid": "8e3dde1a-1226-4121-aa7e-b64e4bf0a8dd", 00:08:43.819 "strip_size_kb": 64, 00:08:43.819 "state": "configuring", 00:08:43.819 "raid_level": "raid0", 00:08:43.819 "superblock": true, 00:08:43.819 "num_base_bdevs": 3, 00:08:43.819 "num_base_bdevs_discovered": 2, 00:08:43.819 
"num_base_bdevs_operational": 3, 00:08:43.819 "base_bdevs_list": [ 00:08:43.819 { 00:08:43.819 "name": null, 00:08:43.820 "uuid": "24ce9e7f-fea8-43ca-b16e-8d6535a411d5", 00:08:43.820 "is_configured": false, 00:08:43.820 "data_offset": 0, 00:08:43.820 "data_size": 63488 00:08:43.820 }, 00:08:43.820 { 00:08:43.820 "name": "BaseBdev2", 00:08:43.820 "uuid": "350323dc-18a0-46de-a26b-6095ffffd61e", 00:08:43.820 "is_configured": true, 00:08:43.820 "data_offset": 2048, 00:08:43.820 "data_size": 63488 00:08:43.820 }, 00:08:43.820 { 00:08:43.820 "name": "BaseBdev3", 00:08:43.820 "uuid": "fdd1c8be-72b1-4fab-93c4-705a256dfbe1", 00:08:43.820 "is_configured": true, 00:08:43.820 "data_offset": 2048, 00:08:43.820 "data_size": 63488 00:08:43.820 } 00:08:43.820 ] 00:08:43.820 }' 00:08:43.820 08:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.820 08:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.079 08:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.079 08:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.079 08:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.079 08:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:44.079 08:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.079 08:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:44.079 08:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.079 08:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:44.079 08:45:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.079 08:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.079 08:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.079 08:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 24ce9e7f-fea8-43ca-b16e-8d6535a411d5 00:08:44.079 08:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.079 08:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.339 [2024-10-05 08:45:20.588047] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:44.339 [2024-10-05 08:45:20.588360] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:44.339 [2024-10-05 08:45:20.588420] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:44.339 [2024-10-05 08:45:20.588716] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:44.339 [2024-10-05 08:45:20.588908] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:44.339 [2024-10-05 08:45:20.588946] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:44.339 NewBaseBdev 00:08:44.339 [2024-10-05 08:45:20.589143] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:44.339 08:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.339 08:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:44.339 08:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:44.339 08:45:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:44.339 08:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:44.339 08:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:44.339 08:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:44.339 08:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:44.339 08:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.339 08:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.339 08:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.339 08:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:44.339 08:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.339 08:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.339 [ 00:08:44.339 { 00:08:44.339 "name": "NewBaseBdev", 00:08:44.339 "aliases": [ 00:08:44.339 "24ce9e7f-fea8-43ca-b16e-8d6535a411d5" 00:08:44.339 ], 00:08:44.339 "product_name": "Malloc disk", 00:08:44.339 "block_size": 512, 00:08:44.339 "num_blocks": 65536, 00:08:44.339 "uuid": "24ce9e7f-fea8-43ca-b16e-8d6535a411d5", 00:08:44.339 "assigned_rate_limits": { 00:08:44.339 "rw_ios_per_sec": 0, 00:08:44.339 "rw_mbytes_per_sec": 0, 00:08:44.339 "r_mbytes_per_sec": 0, 00:08:44.339 "w_mbytes_per_sec": 0 00:08:44.339 }, 00:08:44.339 "claimed": true, 00:08:44.339 "claim_type": "exclusive_write", 00:08:44.339 "zoned": false, 00:08:44.339 "supported_io_types": { 00:08:44.339 "read": true, 00:08:44.339 "write": true, 00:08:44.339 "unmap": true, 
00:08:44.339 "flush": true, 00:08:44.339 "reset": true, 00:08:44.339 "nvme_admin": false, 00:08:44.339 "nvme_io": false, 00:08:44.339 "nvme_io_md": false, 00:08:44.339 "write_zeroes": true, 00:08:44.339 "zcopy": true, 00:08:44.339 "get_zone_info": false, 00:08:44.339 "zone_management": false, 00:08:44.339 "zone_append": false, 00:08:44.339 "compare": false, 00:08:44.339 "compare_and_write": false, 00:08:44.339 "abort": true, 00:08:44.339 "seek_hole": false, 00:08:44.339 "seek_data": false, 00:08:44.339 "copy": true, 00:08:44.339 "nvme_iov_md": false 00:08:44.339 }, 00:08:44.339 "memory_domains": [ 00:08:44.339 { 00:08:44.339 "dma_device_id": "system", 00:08:44.339 "dma_device_type": 1 00:08:44.339 }, 00:08:44.339 { 00:08:44.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.339 "dma_device_type": 2 00:08:44.339 } 00:08:44.339 ], 00:08:44.339 "driver_specific": {} 00:08:44.339 } 00:08:44.339 ] 00:08:44.339 08:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.339 08:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:44.339 08:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:44.339 08:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.339 08:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:44.339 08:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:44.339 08:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.339 08:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.339 08:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.339 08:45:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.339 08:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.339 08:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.339 08:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.339 08:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.339 08:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.339 08:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.339 08:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.339 08:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.339 "name": "Existed_Raid", 00:08:44.339 "uuid": "8e3dde1a-1226-4121-aa7e-b64e4bf0a8dd", 00:08:44.339 "strip_size_kb": 64, 00:08:44.339 "state": "online", 00:08:44.339 "raid_level": "raid0", 00:08:44.339 "superblock": true, 00:08:44.339 "num_base_bdevs": 3, 00:08:44.339 "num_base_bdevs_discovered": 3, 00:08:44.339 "num_base_bdevs_operational": 3, 00:08:44.339 "base_bdevs_list": [ 00:08:44.339 { 00:08:44.339 "name": "NewBaseBdev", 00:08:44.339 "uuid": "24ce9e7f-fea8-43ca-b16e-8d6535a411d5", 00:08:44.339 "is_configured": true, 00:08:44.339 "data_offset": 2048, 00:08:44.339 "data_size": 63488 00:08:44.339 }, 00:08:44.339 { 00:08:44.339 "name": "BaseBdev2", 00:08:44.339 "uuid": "350323dc-18a0-46de-a26b-6095ffffd61e", 00:08:44.339 "is_configured": true, 00:08:44.339 "data_offset": 2048, 00:08:44.339 "data_size": 63488 00:08:44.339 }, 00:08:44.339 { 00:08:44.339 "name": "BaseBdev3", 00:08:44.339 "uuid": "fdd1c8be-72b1-4fab-93c4-705a256dfbe1", 00:08:44.339 "is_configured": 
true, 00:08:44.339 "data_offset": 2048, 00:08:44.339 "data_size": 63488 00:08:44.339 } 00:08:44.339 ] 00:08:44.339 }' 00:08:44.339 08:45:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.339 08:45:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.909 08:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:44.909 08:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:44.909 08:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:44.909 08:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:44.909 08:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:44.909 08:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:44.909 08:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:44.909 08:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:44.909 08:45:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.909 08:45:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.909 [2024-10-05 08:45:21.103456] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:44.909 08:45:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.909 08:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:44.909 "name": "Existed_Raid", 00:08:44.909 "aliases": [ 00:08:44.909 "8e3dde1a-1226-4121-aa7e-b64e4bf0a8dd" 00:08:44.909 ], 00:08:44.909 "product_name": "Raid Volume", 
00:08:44.909 "block_size": 512, 00:08:44.909 "num_blocks": 190464, 00:08:44.909 "uuid": "8e3dde1a-1226-4121-aa7e-b64e4bf0a8dd", 00:08:44.909 "assigned_rate_limits": { 00:08:44.909 "rw_ios_per_sec": 0, 00:08:44.909 "rw_mbytes_per_sec": 0, 00:08:44.909 "r_mbytes_per_sec": 0, 00:08:44.910 "w_mbytes_per_sec": 0 00:08:44.910 }, 00:08:44.910 "claimed": false, 00:08:44.910 "zoned": false, 00:08:44.910 "supported_io_types": { 00:08:44.910 "read": true, 00:08:44.910 "write": true, 00:08:44.910 "unmap": true, 00:08:44.910 "flush": true, 00:08:44.910 "reset": true, 00:08:44.910 "nvme_admin": false, 00:08:44.910 "nvme_io": false, 00:08:44.910 "nvme_io_md": false, 00:08:44.910 "write_zeroes": true, 00:08:44.910 "zcopy": false, 00:08:44.910 "get_zone_info": false, 00:08:44.910 "zone_management": false, 00:08:44.910 "zone_append": false, 00:08:44.910 "compare": false, 00:08:44.910 "compare_and_write": false, 00:08:44.910 "abort": false, 00:08:44.910 "seek_hole": false, 00:08:44.910 "seek_data": false, 00:08:44.910 "copy": false, 00:08:44.910 "nvme_iov_md": false 00:08:44.910 }, 00:08:44.910 "memory_domains": [ 00:08:44.910 { 00:08:44.910 "dma_device_id": "system", 00:08:44.910 "dma_device_type": 1 00:08:44.910 }, 00:08:44.910 { 00:08:44.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.910 "dma_device_type": 2 00:08:44.910 }, 00:08:44.910 { 00:08:44.910 "dma_device_id": "system", 00:08:44.910 "dma_device_type": 1 00:08:44.910 }, 00:08:44.910 { 00:08:44.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.910 "dma_device_type": 2 00:08:44.910 }, 00:08:44.910 { 00:08:44.910 "dma_device_id": "system", 00:08:44.910 "dma_device_type": 1 00:08:44.910 }, 00:08:44.910 { 00:08:44.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.910 "dma_device_type": 2 00:08:44.910 } 00:08:44.910 ], 00:08:44.910 "driver_specific": { 00:08:44.910 "raid": { 00:08:44.910 "uuid": "8e3dde1a-1226-4121-aa7e-b64e4bf0a8dd", 00:08:44.910 "strip_size_kb": 64, 00:08:44.910 "state": "online", 00:08:44.910 
"raid_level": "raid0", 00:08:44.910 "superblock": true, 00:08:44.910 "num_base_bdevs": 3, 00:08:44.910 "num_base_bdevs_discovered": 3, 00:08:44.910 "num_base_bdevs_operational": 3, 00:08:44.910 "base_bdevs_list": [ 00:08:44.910 { 00:08:44.910 "name": "NewBaseBdev", 00:08:44.910 "uuid": "24ce9e7f-fea8-43ca-b16e-8d6535a411d5", 00:08:44.910 "is_configured": true, 00:08:44.910 "data_offset": 2048, 00:08:44.910 "data_size": 63488 00:08:44.910 }, 00:08:44.910 { 00:08:44.910 "name": "BaseBdev2", 00:08:44.910 "uuid": "350323dc-18a0-46de-a26b-6095ffffd61e", 00:08:44.910 "is_configured": true, 00:08:44.910 "data_offset": 2048, 00:08:44.910 "data_size": 63488 00:08:44.910 }, 00:08:44.910 { 00:08:44.910 "name": "BaseBdev3", 00:08:44.910 "uuid": "fdd1c8be-72b1-4fab-93c4-705a256dfbe1", 00:08:44.910 "is_configured": true, 00:08:44.910 "data_offset": 2048, 00:08:44.910 "data_size": 63488 00:08:44.910 } 00:08:44.910 ] 00:08:44.910 } 00:08:44.910 } 00:08:44.910 }' 00:08:44.910 08:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:44.910 08:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:44.910 BaseBdev2 00:08:44.910 BaseBdev3' 00:08:44.910 08:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.910 08:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:44.910 08:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.910 08:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:44.910 08:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:08:44.910 08:45:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.910 08:45:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.910 08:45:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.910 08:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.910 08:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.910 08:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.910 08:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.910 08:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:44.910 08:45:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.910 08:45:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.910 08:45:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.910 08:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.910 08:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.910 08:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.910 08:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:44.910 08:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.910 08:45:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.910 08:45:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.910 08:45:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.910 08:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.910 08:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.910 08:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:44.910 08:45:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.910 08:45:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.910 [2024-10-05 08:45:21.346721] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:44.910 [2024-10-05 08:45:21.346746] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:44.910 [2024-10-05 08:45:21.346821] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:44.910 [2024-10-05 08:45:21.346877] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:44.910 [2024-10-05 08:45:21.346889] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:44.910 08:45:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.910 08:45:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63873 00:08:44.910 08:45:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 63873 ']' 00:08:44.910 08:45:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 63873 00:08:44.910 08:45:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:44.910 08:45:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:44.910 08:45:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63873 00:08:45.170 killing process with pid 63873 00:08:45.170 08:45:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:45.170 08:45:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:45.170 08:45:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63873' 00:08:45.170 08:45:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 63873 00:08:45.170 [2024-10-05 08:45:21.398754] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:45.170 08:45:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 63873 00:08:45.430 [2024-10-05 08:45:21.709993] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:46.845 08:45:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:46.845 00:08:46.845 real 0m10.707s 00:08:46.845 user 0m16.654s 00:08:46.845 sys 0m1.996s 00:08:46.845 08:45:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:46.845 ************************************ 00:08:46.845 END TEST raid_state_function_test_sb 00:08:46.845 ************************************ 00:08:46.845 08:45:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.845 08:45:23 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:46.845 08:45:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:46.845 08:45:23 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:08:46.845 08:45:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:46.845 ************************************ 00:08:46.845 START TEST raid_superblock_test 00:08:46.845 ************************************ 00:08:46.845 08:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:08:46.845 08:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:46.845 08:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:46.845 08:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:46.845 08:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:46.845 08:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:46.845 08:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:46.845 08:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:46.845 08:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:46.845 08:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:46.845 08:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:46.845 08:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:46.845 08:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:46.845 08:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:46.845 08:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:46.845 08:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:46.845 08:45:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:46.845 08:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=64433 00:08:46.845 08:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:46.845 08:45:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 64433 00:08:46.845 08:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 64433 ']' 00:08:46.845 08:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.845 08:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:46.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.845 08:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.845 08:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:46.845 08:45:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.846 [2024-10-05 08:45:23.192561] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:08:46.846 [2024-10-05 08:45:23.192682] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64433 ] 00:08:47.106 [2024-10-05 08:45:23.358271] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.367 [2024-10-05 08:45:23.594225] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.367 [2024-10-05 08:45:23.818343] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.367 [2024-10-05 08:45:23.818382] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.627 08:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:47.627 08:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:47.627 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:47.627 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:47.627 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:47.627 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:47.627 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:47.627 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:47.627 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:47.627 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:47.627 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:47.627 
08:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.627 08:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.627 malloc1 00:08:47.627 08:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.627 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:47.627 08:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.627 08:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.627 [2024-10-05 08:45:24.063092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:47.627 [2024-10-05 08:45:24.063200] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:47.627 [2024-10-05 08:45:24.063243] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:47.627 [2024-10-05 08:45:24.063274] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:47.627 [2024-10-05 08:45:24.065675] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:47.627 [2024-10-05 08:45:24.065757] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:47.627 pt1 00:08:47.627 08:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.627 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:47.627 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:47.627 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:47.627 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:47.627 08:45:24 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:47.627 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:47.627 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:47.627 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:47.627 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:47.627 08:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.627 08:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.887 malloc2 00:08:47.887 08:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.888 [2024-10-05 08:45:24.154460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:47.888 [2024-10-05 08:45:24.154550] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:47.888 [2024-10-05 08:45:24.154590] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:47.888 [2024-10-05 08:45:24.154618] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:47.888 [2024-10-05 08:45:24.156957] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:47.888 [2024-10-05 08:45:24.157034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:47.888 
pt2 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.888 malloc3 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.888 [2024-10-05 08:45:24.215739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:47.888 [2024-10-05 08:45:24.215824] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:47.888 [2024-10-05 08:45:24.215861] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:47.888 [2024-10-05 08:45:24.215888] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:47.888 [2024-10-05 08:45:24.218232] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:47.888 [2024-10-05 08:45:24.218297] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:47.888 pt3 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.888 [2024-10-05 08:45:24.227796] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:47.888 [2024-10-05 08:45:24.229847] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:47.888 [2024-10-05 08:45:24.229949] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:47.888 [2024-10-05 08:45:24.230133] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:47.888 [2024-10-05 08:45:24.230180] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:47.888 [2024-10-05 08:45:24.230424] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:08:47.888 [2024-10-05 08:45:24.230625] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:47.888 [2024-10-05 08:45:24.230665] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:47.888 [2024-10-05 08:45:24.230835] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.888 08:45:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.888 "name": "raid_bdev1", 00:08:47.888 "uuid": "3955a7ba-2745-42f9-b3df-d3359e1756c1", 00:08:47.888 "strip_size_kb": 64, 00:08:47.888 "state": "online", 00:08:47.888 "raid_level": "raid0", 00:08:47.888 "superblock": true, 00:08:47.888 "num_base_bdevs": 3, 00:08:47.888 "num_base_bdevs_discovered": 3, 00:08:47.888 "num_base_bdevs_operational": 3, 00:08:47.888 "base_bdevs_list": [ 00:08:47.888 { 00:08:47.888 "name": "pt1", 00:08:47.888 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:47.888 "is_configured": true, 00:08:47.888 "data_offset": 2048, 00:08:47.888 "data_size": 63488 00:08:47.888 }, 00:08:47.888 { 00:08:47.888 "name": "pt2", 00:08:47.888 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:47.888 "is_configured": true, 00:08:47.888 "data_offset": 2048, 00:08:47.888 "data_size": 63488 00:08:47.888 }, 00:08:47.888 { 00:08:47.888 "name": "pt3", 00:08:47.888 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:47.888 "is_configured": true, 00:08:47.888 "data_offset": 2048, 00:08:47.888 "data_size": 63488 00:08:47.888 } 00:08:47.888 ] 00:08:47.888 }' 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.888 08:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.458 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:48.458 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:48.458 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:48.458 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:48.458 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:48.458 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:48.458 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:48.458 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:48.458 08:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.458 08:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.458 [2024-10-05 08:45:24.715289] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:48.458 08:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.458 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:48.458 "name": "raid_bdev1", 00:08:48.458 "aliases": [ 00:08:48.458 "3955a7ba-2745-42f9-b3df-d3359e1756c1" 00:08:48.458 ], 00:08:48.458 "product_name": "Raid Volume", 00:08:48.458 "block_size": 512, 00:08:48.458 "num_blocks": 190464, 00:08:48.458 "uuid": "3955a7ba-2745-42f9-b3df-d3359e1756c1", 00:08:48.458 "assigned_rate_limits": { 00:08:48.458 "rw_ios_per_sec": 0, 00:08:48.458 "rw_mbytes_per_sec": 0, 00:08:48.458 "r_mbytes_per_sec": 0, 00:08:48.458 "w_mbytes_per_sec": 0 00:08:48.458 }, 00:08:48.458 "claimed": false, 00:08:48.458 "zoned": false, 00:08:48.458 "supported_io_types": { 00:08:48.458 "read": true, 00:08:48.458 "write": true, 00:08:48.458 "unmap": true, 00:08:48.458 "flush": true, 00:08:48.458 "reset": true, 00:08:48.458 "nvme_admin": false, 00:08:48.458 "nvme_io": false, 00:08:48.458 "nvme_io_md": false, 00:08:48.458 "write_zeroes": true, 00:08:48.458 "zcopy": false, 00:08:48.458 "get_zone_info": false, 00:08:48.458 "zone_management": false, 00:08:48.458 "zone_append": false, 00:08:48.458 "compare": 
false, 00:08:48.458 "compare_and_write": false, 00:08:48.458 "abort": false, 00:08:48.458 "seek_hole": false, 00:08:48.458 "seek_data": false, 00:08:48.458 "copy": false, 00:08:48.458 "nvme_iov_md": false 00:08:48.458 }, 00:08:48.459 "memory_domains": [ 00:08:48.459 { 00:08:48.459 "dma_device_id": "system", 00:08:48.459 "dma_device_type": 1 00:08:48.459 }, 00:08:48.459 { 00:08:48.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.459 "dma_device_type": 2 00:08:48.459 }, 00:08:48.459 { 00:08:48.459 "dma_device_id": "system", 00:08:48.459 "dma_device_type": 1 00:08:48.459 }, 00:08:48.459 { 00:08:48.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.459 "dma_device_type": 2 00:08:48.459 }, 00:08:48.459 { 00:08:48.459 "dma_device_id": "system", 00:08:48.459 "dma_device_type": 1 00:08:48.459 }, 00:08:48.459 { 00:08:48.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.459 "dma_device_type": 2 00:08:48.459 } 00:08:48.459 ], 00:08:48.459 "driver_specific": { 00:08:48.459 "raid": { 00:08:48.459 "uuid": "3955a7ba-2745-42f9-b3df-d3359e1756c1", 00:08:48.459 "strip_size_kb": 64, 00:08:48.459 "state": "online", 00:08:48.459 "raid_level": "raid0", 00:08:48.459 "superblock": true, 00:08:48.459 "num_base_bdevs": 3, 00:08:48.459 "num_base_bdevs_discovered": 3, 00:08:48.459 "num_base_bdevs_operational": 3, 00:08:48.459 "base_bdevs_list": [ 00:08:48.459 { 00:08:48.459 "name": "pt1", 00:08:48.459 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:48.459 "is_configured": true, 00:08:48.459 "data_offset": 2048, 00:08:48.459 "data_size": 63488 00:08:48.459 }, 00:08:48.459 { 00:08:48.459 "name": "pt2", 00:08:48.459 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:48.459 "is_configured": true, 00:08:48.459 "data_offset": 2048, 00:08:48.459 "data_size": 63488 00:08:48.459 }, 00:08:48.459 { 00:08:48.459 "name": "pt3", 00:08:48.459 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:48.459 "is_configured": true, 00:08:48.459 "data_offset": 2048, 00:08:48.459 "data_size": 
63488 00:08:48.459 } 00:08:48.459 ] 00:08:48.459 } 00:08:48.459 } 00:08:48.459 }' 00:08:48.459 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:48.459 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:48.459 pt2 00:08:48.459 pt3' 00:08:48.459 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:48.459 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:48.459 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:48.459 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:48.459 08:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.459 08:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.459 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:48.459 08:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.459 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:48.459 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:48.459 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:48.459 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:48.459 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:48.459 08:45:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.459 08:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.719 08:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.719 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:48.719 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:48.719 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:48.719 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:48.719 08:45:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:48.719 08:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.719 08:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.719 08:45:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.719 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:48.719 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:48.719 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:48.719 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:48.719 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.719 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.719 [2024-10-05 08:45:25.018643] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:48.719 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:08:48.719 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3955a7ba-2745-42f9-b3df-d3359e1756c1 00:08:48.719 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3955a7ba-2745-42f9-b3df-d3359e1756c1 ']' 00:08:48.719 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:48.719 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.719 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.719 [2024-10-05 08:45:25.050297] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:48.719 [2024-10-05 08:45:25.050329] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:48.719 [2024-10-05 08:45:25.050409] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:48.719 [2024-10-05 08:45:25.050478] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:48.719 [2024-10-05 08:45:25.050489] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:48.719 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.719 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.719 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:48.719 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.719 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.719 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.719 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:08:48.719 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:48.719 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:48.719 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:48.719 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.719 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.719 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.719 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:48.719 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:48.719 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.719 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.719 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.719 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:48.719 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:48.719 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.719 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.719 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.719 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:48.719 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:48.719 08:45:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.719 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.719 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.719 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.980 [2024-10-05 08:45:25.202089] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:48.980 [2024-10-05 08:45:25.204286] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:48.980 [2024-10-05 08:45:25.204383] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:48.980 [2024-10-05 08:45:25.204456] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:48.980 [2024-10-05 08:45:25.204543] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:48.980 [2024-10-05 08:45:25.204563] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:48.980 [2024-10-05 08:45:25.204580] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:48.980 [2024-10-05 08:45:25.204589] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:48.980 request: 00:08:48.980 { 00:08:48.980 "name": "raid_bdev1", 00:08:48.980 "raid_level": "raid0", 00:08:48.980 "base_bdevs": [ 00:08:48.980 "malloc1", 00:08:48.980 "malloc2", 00:08:48.980 "malloc3" 00:08:48.980 ], 00:08:48.980 "strip_size_kb": 64, 00:08:48.980 "superblock": false, 00:08:48.980 "method": "bdev_raid_create", 00:08:48.980 "req_id": 1 00:08:48.980 } 00:08:48.980 Got JSON-RPC error response 00:08:48.980 response: 00:08:48.980 { 00:08:48.980 "code": -17, 00:08:48.980 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:48.980 } 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.980 [2024-10-05 08:45:25.257945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:48.980 [2024-10-05 08:45:25.258037] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.980 [2024-10-05 08:45:25.258073] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:48.980 [2024-10-05 08:45:25.258100] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.980 [2024-10-05 08:45:25.260535] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.980 [2024-10-05 08:45:25.260601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:48.980 [2024-10-05 08:45:25.260690] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:48.980 [2024-10-05 08:45:25.260757] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:08:48.980 pt1 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.980 "name": "raid_bdev1", 00:08:48.980 "uuid": "3955a7ba-2745-42f9-b3df-d3359e1756c1", 00:08:48.980 
"strip_size_kb": 64, 00:08:48.980 "state": "configuring", 00:08:48.980 "raid_level": "raid0", 00:08:48.980 "superblock": true, 00:08:48.980 "num_base_bdevs": 3, 00:08:48.980 "num_base_bdevs_discovered": 1, 00:08:48.980 "num_base_bdevs_operational": 3, 00:08:48.980 "base_bdevs_list": [ 00:08:48.980 { 00:08:48.980 "name": "pt1", 00:08:48.980 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:48.980 "is_configured": true, 00:08:48.980 "data_offset": 2048, 00:08:48.980 "data_size": 63488 00:08:48.980 }, 00:08:48.980 { 00:08:48.980 "name": null, 00:08:48.980 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:48.980 "is_configured": false, 00:08:48.980 "data_offset": 2048, 00:08:48.980 "data_size": 63488 00:08:48.980 }, 00:08:48.980 { 00:08:48.980 "name": null, 00:08:48.980 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:48.980 "is_configured": false, 00:08:48.980 "data_offset": 2048, 00:08:48.980 "data_size": 63488 00:08:48.980 } 00:08:48.980 ] 00:08:48.980 }' 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.980 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.241 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:49.241 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:49.241 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.241 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.241 [2024-10-05 08:45:25.709228] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:49.241 [2024-10-05 08:45:25.709306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:49.241 [2024-10-05 08:45:25.709340] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:08:49.241 [2024-10-05 08:45:25.709350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:49.241 [2024-10-05 08:45:25.709856] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:49.241 [2024-10-05 08:45:25.709872] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:49.241 [2024-10-05 08:45:25.709980] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:49.241 [2024-10-05 08:45:25.710005] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:49.501 pt2 00:08:49.501 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.501 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:49.501 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.501 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.501 [2024-10-05 08:45:25.717198] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:49.501 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.501 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:49.501 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:49.501 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.501 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:49.501 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.501 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.501 08:45:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.501 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.501 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.501 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.501 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.501 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.501 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:49.501 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.501 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.501 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.501 "name": "raid_bdev1", 00:08:49.501 "uuid": "3955a7ba-2745-42f9-b3df-d3359e1756c1", 00:08:49.501 "strip_size_kb": 64, 00:08:49.501 "state": "configuring", 00:08:49.501 "raid_level": "raid0", 00:08:49.501 "superblock": true, 00:08:49.501 "num_base_bdevs": 3, 00:08:49.501 "num_base_bdevs_discovered": 1, 00:08:49.501 "num_base_bdevs_operational": 3, 00:08:49.501 "base_bdevs_list": [ 00:08:49.501 { 00:08:49.501 "name": "pt1", 00:08:49.501 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:49.501 "is_configured": true, 00:08:49.501 "data_offset": 2048, 00:08:49.501 "data_size": 63488 00:08:49.501 }, 00:08:49.501 { 00:08:49.501 "name": null, 00:08:49.501 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:49.501 "is_configured": false, 00:08:49.501 "data_offset": 0, 00:08:49.501 "data_size": 63488 00:08:49.501 }, 00:08:49.501 { 00:08:49.501 "name": null, 00:08:49.501 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:49.501 
"is_configured": false, 00:08:49.501 "data_offset": 2048, 00:08:49.501 "data_size": 63488 00:08:49.501 } 00:08:49.501 ] 00:08:49.501 }' 00:08:49.501 08:45:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.501 08:45:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.762 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:49.762 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:49.762 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:49.762 08:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.762 08:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.762 [2024-10-05 08:45:26.148493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:49.762 [2024-10-05 08:45:26.148611] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:49.762 [2024-10-05 08:45:26.148646] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:49.762 [2024-10-05 08:45:26.148676] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:49.762 [2024-10-05 08:45:26.149259] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:49.762 [2024-10-05 08:45:26.149321] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:49.762 [2024-10-05 08:45:26.149440] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:49.762 [2024-10-05 08:45:26.149508] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:49.762 pt2 00:08:49.762 08:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:49.762 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:49.762 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:49.762 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:49.762 08:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.763 08:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.763 [2024-10-05 08:45:26.160475] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:49.763 [2024-10-05 08:45:26.160554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:49.763 [2024-10-05 08:45:26.160581] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:49.763 [2024-10-05 08:45:26.160606] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:49.763 [2024-10-05 08:45:26.161044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:49.763 [2024-10-05 08:45:26.161103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:49.763 [2024-10-05 08:45:26.161190] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:49.763 [2024-10-05 08:45:26.161237] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:49.763 [2024-10-05 08:45:26.161385] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:49.763 [2024-10-05 08:45:26.161422] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:49.763 [2024-10-05 08:45:26.161698] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:49.763 [2024-10-05 08:45:26.161880] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:49.763 [2024-10-05 08:45:26.161916] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:49.763 [2024-10-05 08:45:26.162109] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:49.763 pt3 00:08:49.763 08:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.763 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:49.763 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:49.763 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:49.763 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:49.763 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:49.763 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:49.763 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.763 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.763 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.763 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.763 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.763 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.763 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.763 08:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:49.763 08:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.763 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:49.763 08:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.763 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.763 "name": "raid_bdev1", 00:08:49.763 "uuid": "3955a7ba-2745-42f9-b3df-d3359e1756c1", 00:08:49.763 "strip_size_kb": 64, 00:08:49.763 "state": "online", 00:08:49.763 "raid_level": "raid0", 00:08:49.763 "superblock": true, 00:08:49.763 "num_base_bdevs": 3, 00:08:49.763 "num_base_bdevs_discovered": 3, 00:08:49.763 "num_base_bdevs_operational": 3, 00:08:49.763 "base_bdevs_list": [ 00:08:49.763 { 00:08:49.763 "name": "pt1", 00:08:49.763 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:49.763 "is_configured": true, 00:08:49.763 "data_offset": 2048, 00:08:49.763 "data_size": 63488 00:08:49.763 }, 00:08:49.763 { 00:08:49.763 "name": "pt2", 00:08:49.763 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:49.763 "is_configured": true, 00:08:49.763 "data_offset": 2048, 00:08:49.763 "data_size": 63488 00:08:49.763 }, 00:08:49.763 { 00:08:49.763 "name": "pt3", 00:08:49.763 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:49.763 "is_configured": true, 00:08:49.763 "data_offset": 2048, 00:08:49.763 "data_size": 63488 00:08:49.763 } 00:08:49.763 ] 00:08:49.763 }' 00:08:49.763 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.763 08:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.333 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:50.333 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:50.333 08:45:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:50.333 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:50.333 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:50.333 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:50.333 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:50.333 08:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.333 08:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.333 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:50.333 [2024-10-05 08:45:26.603986] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:50.333 08:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.333 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:50.333 "name": "raid_bdev1", 00:08:50.333 "aliases": [ 00:08:50.333 "3955a7ba-2745-42f9-b3df-d3359e1756c1" 00:08:50.333 ], 00:08:50.333 "product_name": "Raid Volume", 00:08:50.333 "block_size": 512, 00:08:50.333 "num_blocks": 190464, 00:08:50.333 "uuid": "3955a7ba-2745-42f9-b3df-d3359e1756c1", 00:08:50.333 "assigned_rate_limits": { 00:08:50.333 "rw_ios_per_sec": 0, 00:08:50.333 "rw_mbytes_per_sec": 0, 00:08:50.333 "r_mbytes_per_sec": 0, 00:08:50.333 "w_mbytes_per_sec": 0 00:08:50.333 }, 00:08:50.333 "claimed": false, 00:08:50.333 "zoned": false, 00:08:50.333 "supported_io_types": { 00:08:50.333 "read": true, 00:08:50.333 "write": true, 00:08:50.333 "unmap": true, 00:08:50.333 "flush": true, 00:08:50.333 "reset": true, 00:08:50.333 "nvme_admin": false, 00:08:50.333 "nvme_io": false, 00:08:50.333 "nvme_io_md": false, 00:08:50.333 
"write_zeroes": true, 00:08:50.333 "zcopy": false, 00:08:50.333 "get_zone_info": false, 00:08:50.333 "zone_management": false, 00:08:50.333 "zone_append": false, 00:08:50.333 "compare": false, 00:08:50.333 "compare_and_write": false, 00:08:50.333 "abort": false, 00:08:50.333 "seek_hole": false, 00:08:50.333 "seek_data": false, 00:08:50.333 "copy": false, 00:08:50.333 "nvme_iov_md": false 00:08:50.333 }, 00:08:50.333 "memory_domains": [ 00:08:50.333 { 00:08:50.333 "dma_device_id": "system", 00:08:50.333 "dma_device_type": 1 00:08:50.333 }, 00:08:50.333 { 00:08:50.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.333 "dma_device_type": 2 00:08:50.333 }, 00:08:50.333 { 00:08:50.333 "dma_device_id": "system", 00:08:50.333 "dma_device_type": 1 00:08:50.333 }, 00:08:50.333 { 00:08:50.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.333 "dma_device_type": 2 00:08:50.334 }, 00:08:50.334 { 00:08:50.334 "dma_device_id": "system", 00:08:50.334 "dma_device_type": 1 00:08:50.334 }, 00:08:50.334 { 00:08:50.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.334 "dma_device_type": 2 00:08:50.334 } 00:08:50.334 ], 00:08:50.334 "driver_specific": { 00:08:50.334 "raid": { 00:08:50.334 "uuid": "3955a7ba-2745-42f9-b3df-d3359e1756c1", 00:08:50.334 "strip_size_kb": 64, 00:08:50.334 "state": "online", 00:08:50.334 "raid_level": "raid0", 00:08:50.334 "superblock": true, 00:08:50.334 "num_base_bdevs": 3, 00:08:50.334 "num_base_bdevs_discovered": 3, 00:08:50.334 "num_base_bdevs_operational": 3, 00:08:50.334 "base_bdevs_list": [ 00:08:50.334 { 00:08:50.334 "name": "pt1", 00:08:50.334 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:50.334 "is_configured": true, 00:08:50.334 "data_offset": 2048, 00:08:50.334 "data_size": 63488 00:08:50.334 }, 00:08:50.334 { 00:08:50.334 "name": "pt2", 00:08:50.334 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:50.334 "is_configured": true, 00:08:50.334 "data_offset": 2048, 00:08:50.334 "data_size": 63488 00:08:50.334 }, 00:08:50.334 
{ 00:08:50.334 "name": "pt3", 00:08:50.334 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:50.334 "is_configured": true, 00:08:50.334 "data_offset": 2048, 00:08:50.334 "data_size": 63488 00:08:50.334 } 00:08:50.334 ] 00:08:50.334 } 00:08:50.334 } 00:08:50.334 }' 00:08:50.334 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:50.334 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:50.334 pt2 00:08:50.334 pt3' 00:08:50.334 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.334 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:50.334 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:50.334 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:50.334 08:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.334 08:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.334 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.334 08:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.334 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:50.334 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:50.334 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:50.334 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:50.334 08:45:26 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.334 08:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.334 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.334 08:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.594 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:50.594 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:50.594 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:50.594 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:50.594 08:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.594 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:50.594 08:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.594 08:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.594 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:50.594 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:50.594 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:50.594 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:50.594 08:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.594 08:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.594 
[2024-10-05 08:45:26.895399] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:50.594 08:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.594 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3955a7ba-2745-42f9-b3df-d3359e1756c1 '!=' 3955a7ba-2745-42f9-b3df-d3359e1756c1 ']' 00:08:50.594 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:50.594 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:50.594 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:50.594 08:45:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 64433 00:08:50.594 08:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 64433 ']' 00:08:50.594 08:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 64433 00:08:50.594 08:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:50.594 08:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:50.594 08:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64433 00:08:50.594 killing process with pid 64433 00:08:50.594 08:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:50.594 08:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:50.594 08:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64433' 00:08:50.594 08:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 64433 00:08:50.594 [2024-10-05 08:45:26.943020] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:50.594 [2024-10-05 08:45:26.943112] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:50.594 [2024-10-05 08:45:26.943171] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:50.594 [2024-10-05 08:45:26.943185] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:50.594 08:45:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 64433 00:08:50.854 [2024-10-05 08:45:27.251978] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:52.235 08:45:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:52.235 00:08:52.235 real 0m5.504s 00:08:52.235 user 0m7.641s 00:08:52.235 sys 0m1.013s 00:08:52.235 ************************************ 00:08:52.235 END TEST raid_superblock_test 00:08:52.235 ************************************ 00:08:52.235 08:45:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:52.235 08:45:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.235 08:45:28 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:52.235 08:45:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:52.235 08:45:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:52.235 08:45:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:52.235 ************************************ 00:08:52.235 START TEST raid_read_error_test 00:08:52.235 ************************************ 00:08:52.235 08:45:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:08:52.235 08:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:52.235 08:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:52.235 08:45:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:52.235 08:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:52.235 08:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:52.235 08:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:52.235 08:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:52.235 08:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:52.235 08:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:52.235 08:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:52.235 08:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:52.235 08:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:52.235 08:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:52.235 08:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:52.235 08:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:52.235 08:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:52.235 08:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:52.235 08:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:52.235 08:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:52.235 08:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:52.235 08:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:52.235 08:45:28 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:52.235 08:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:52.235 08:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:52.235 08:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:52.235 08:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.VfrXMAtVKP 00:08:52.235 08:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=64656 00:08:52.235 08:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:52.235 08:45:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 64656 00:08:52.235 08:45:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 64656 ']' 00:08:52.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.235 08:45:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.235 08:45:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:52.235 08:45:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.235 08:45:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:52.235 08:45:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.496 [2024-10-05 08:45:28.793566] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:08:52.496 [2024-10-05 08:45:28.793682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64656 ] 00:08:52.496 [2024-10-05 08:45:28.959916] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.756 [2024-10-05 08:45:29.204262] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.015 [2024-10-05 08:45:29.428608] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:53.015 [2024-10-05 08:45:29.428644] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:53.275 08:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:53.275 08:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:53.275 08:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:53.275 08:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:53.275 08:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.275 08:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.275 BaseBdev1_malloc 00:08:53.275 08:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.275 08:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:53.275 08:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.275 08:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.275 true 00:08:53.275 08:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:53.275 08:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:53.275 08:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.275 08:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.275 [2024-10-05 08:45:29.677252] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:53.275 [2024-10-05 08:45:29.677353] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:53.275 [2024-10-05 08:45:29.677389] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:53.275 [2024-10-05 08:45:29.677420] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:53.275 [2024-10-05 08:45:29.679807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:53.275 [2024-10-05 08:45:29.679877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:53.275 BaseBdev1 00:08:53.275 08:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.275 08:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:53.275 08:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:53.275 08:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.275 08:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.536 BaseBdev2_malloc 00:08:53.536 08:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.536 08:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:53.536 08:45:29 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.536 08:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.536 true 00:08:53.536 08:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.536 08:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:53.536 08:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.536 08:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.536 [2024-10-05 08:45:29.779895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:53.536 [2024-10-05 08:45:29.780005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:53.536 [2024-10-05 08:45:29.780039] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:53.536 [2024-10-05 08:45:29.780070] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:53.536 [2024-10-05 08:45:29.782426] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:53.536 [2024-10-05 08:45:29.782497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:53.536 BaseBdev2 00:08:53.536 08:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.536 08:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:53.536 08:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:53.536 08:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.536 08:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.536 BaseBdev3_malloc 00:08:53.536 08:45:29 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.536 08:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:53.536 08:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.536 08:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.536 true 00:08:53.536 08:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.536 08:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:53.536 08:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.536 08:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.536 [2024-10-05 08:45:29.853991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:53.536 [2024-10-05 08:45:29.854082] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:53.536 [2024-10-05 08:45:29.854115] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:53.536 [2024-10-05 08:45:29.854145] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:53.536 [2024-10-05 08:45:29.856488] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:53.536 [2024-10-05 08:45:29.856558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:53.536 BaseBdev3 00:08:53.536 08:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.536 08:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:53.536 08:45:29 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.536 08:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.536 [2024-10-05 08:45:29.866035] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:53.536 [2024-10-05 08:45:29.868034] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:53.536 [2024-10-05 08:45:29.868114] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:53.536 [2024-10-05 08:45:29.868309] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:53.536 [2024-10-05 08:45:29.868321] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:53.536 [2024-10-05 08:45:29.868568] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:53.536 [2024-10-05 08:45:29.868712] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:53.536 [2024-10-05 08:45:29.868724] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:53.536 [2024-10-05 08:45:29.868888] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:53.536 08:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.536 08:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:53.536 08:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:53.536 08:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:53.536 08:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:53.536 08:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.536 08:45:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.536 08:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.536 08:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.536 08:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.536 08:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.536 08:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.536 08:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:53.536 08:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.536 08:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.536 08:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.536 08:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.536 "name": "raid_bdev1", 00:08:53.536 "uuid": "836636ce-edfe-4b73-a6fb-a058508dc028", 00:08:53.536 "strip_size_kb": 64, 00:08:53.536 "state": "online", 00:08:53.536 "raid_level": "raid0", 00:08:53.536 "superblock": true, 00:08:53.536 "num_base_bdevs": 3, 00:08:53.536 "num_base_bdevs_discovered": 3, 00:08:53.536 "num_base_bdevs_operational": 3, 00:08:53.536 "base_bdevs_list": [ 00:08:53.536 { 00:08:53.536 "name": "BaseBdev1", 00:08:53.536 "uuid": "efd0bb91-ac0c-52f3-8194-56e4cffd291c", 00:08:53.536 "is_configured": true, 00:08:53.536 "data_offset": 2048, 00:08:53.536 "data_size": 63488 00:08:53.536 }, 00:08:53.536 { 00:08:53.536 "name": "BaseBdev2", 00:08:53.536 "uuid": "88307a65-6582-5e5c-b6c3-3f18af2fd539", 00:08:53.536 "is_configured": true, 00:08:53.536 "data_offset": 2048, 00:08:53.536 "data_size": 63488 
00:08:53.536 }, 00:08:53.536 { 00:08:53.536 "name": "BaseBdev3", 00:08:53.536 "uuid": "1353c62a-0565-5e7d-90a7-0acd01fbcb98", 00:08:53.536 "is_configured": true, 00:08:53.536 "data_offset": 2048, 00:08:53.536 "data_size": 63488 00:08:53.536 } 00:08:53.536 ] 00:08:53.536 }' 00:08:53.536 08:45:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.536 08:45:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.108 08:45:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:54.108 08:45:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:54.108 [2024-10-05 08:45:30.382402] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:55.048 08:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:55.048 08:45:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.048 08:45:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.048 08:45:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.048 08:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:55.048 08:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:55.048 08:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:55.048 08:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:55.048 08:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:55.048 08:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:55.048 08:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:55.048 08:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.048 08:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.048 08:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.048 08:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.048 08:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.048 08:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.048 08:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.048 08:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:55.048 08:45:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.048 08:45:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.048 08:45:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.048 08:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.048 "name": "raid_bdev1", 00:08:55.048 "uuid": "836636ce-edfe-4b73-a6fb-a058508dc028", 00:08:55.048 "strip_size_kb": 64, 00:08:55.048 "state": "online", 00:08:55.048 "raid_level": "raid0", 00:08:55.048 "superblock": true, 00:08:55.048 "num_base_bdevs": 3, 00:08:55.048 "num_base_bdevs_discovered": 3, 00:08:55.048 "num_base_bdevs_operational": 3, 00:08:55.048 "base_bdevs_list": [ 00:08:55.048 { 00:08:55.048 "name": "BaseBdev1", 00:08:55.048 "uuid": "efd0bb91-ac0c-52f3-8194-56e4cffd291c", 00:08:55.048 "is_configured": true, 00:08:55.048 "data_offset": 2048, 00:08:55.048 "data_size": 63488 
00:08:55.048 }, 00:08:55.048 { 00:08:55.048 "name": "BaseBdev2", 00:08:55.048 "uuid": "88307a65-6582-5e5c-b6c3-3f18af2fd539", 00:08:55.048 "is_configured": true, 00:08:55.048 "data_offset": 2048, 00:08:55.048 "data_size": 63488 00:08:55.048 }, 00:08:55.048 { 00:08:55.048 "name": "BaseBdev3", 00:08:55.048 "uuid": "1353c62a-0565-5e7d-90a7-0acd01fbcb98", 00:08:55.048 "is_configured": true, 00:08:55.048 "data_offset": 2048, 00:08:55.048 "data_size": 63488 00:08:55.048 } 00:08:55.048 ] 00:08:55.048 }' 00:08:55.048 08:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.048 08:45:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.308 08:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:55.308 08:45:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.308 08:45:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.308 [2024-10-05 08:45:31.766612] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:55.308 [2024-10-05 08:45:31.766714] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:55.308 [2024-10-05 08:45:31.769257] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:55.308 [2024-10-05 08:45:31.769306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:55.308 [2024-10-05 08:45:31.769349] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:55.308 [2024-10-05 08:45:31.769359] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:55.308 { 00:08:55.308 "results": [ 00:08:55.308 { 00:08:55.308 "job": "raid_bdev1", 00:08:55.308 "core_mask": "0x1", 00:08:55.308 "workload": "randrw", 00:08:55.308 "percentage": 50, 
00:08:55.308 "status": "finished", 00:08:55.308 "queue_depth": 1, 00:08:55.308 "io_size": 131072, 00:08:55.308 "runtime": 1.385069, 00:08:55.308 "iops": 14693.852797225265, 00:08:55.308 "mibps": 1836.7315996531581, 00:08:55.308 "io_failed": 1, 00:08:55.308 "io_timeout": 0, 00:08:55.308 "avg_latency_us": 95.94851671491624, 00:08:55.308 "min_latency_us": 24.705676855895195, 00:08:55.308 "max_latency_us": 1352.216593886463 00:08:55.308 } 00:08:55.308 ], 00:08:55.308 "core_count": 1 00:08:55.308 } 00:08:55.308 08:45:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.308 08:45:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 64656 00:08:55.308 08:45:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 64656 ']' 00:08:55.308 08:45:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 64656 00:08:55.308 08:45:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:55.567 08:45:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:55.567 08:45:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64656 00:08:55.567 08:45:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:55.567 08:45:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:55.567 killing process with pid 64656 00:08:55.567 08:45:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64656' 00:08:55.567 08:45:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 64656 00:08:55.567 [2024-10-05 08:45:31.815216] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:55.567 08:45:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 64656 00:08:55.827 [2024-10-05 
08:45:32.054366] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:57.210 08:45:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:57.210 08:45:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.VfrXMAtVKP 00:08:57.210 08:45:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:57.210 08:45:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:57.210 08:45:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:57.210 08:45:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:57.210 08:45:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:57.210 08:45:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:57.210 00:08:57.210 real 0m4.755s 00:08:57.210 user 0m5.458s 00:08:57.210 sys 0m0.686s 00:08:57.210 ************************************ 00:08:57.210 END TEST raid_read_error_test 00:08:57.210 ************************************ 00:08:57.210 08:45:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:57.210 08:45:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.210 08:45:33 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:08:57.210 08:45:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:57.210 08:45:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:57.210 08:45:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:57.210 ************************************ 00:08:57.210 START TEST raid_write_error_test 00:08:57.210 ************************************ 00:08:57.210 08:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:08:57.210 08:45:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:57.210 08:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:57.210 08:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:57.210 08:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:57.210 08:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:57.210 08:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:57.210 08:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:57.210 08:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:57.210 08:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:57.210 08:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:57.210 08:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:57.210 08:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:57.210 08:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:57.210 08:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:57.210 08:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:57.210 08:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:57.210 08:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:57.210 08:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:57.210 08:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:57.210 08:45:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:57.211 08:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:57.211 08:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:57.211 08:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:57.211 08:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:57.211 08:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:57.211 08:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.I1XhxurUX5 00:08:57.211 08:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=64777 00:08:57.211 08:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:57.211 08:45:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 64777 00:08:57.211 08:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 64777 ']' 00:08:57.211 08:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.211 08:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:57.211 08:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:57.211 08:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:57.211 08:45:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.211 [2024-10-05 08:45:33.609013] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:08:57.211 [2024-10-05 08:45:33.609217] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64777 ] 00:08:57.471 [2024-10-05 08:45:33.772043] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.731 [2024-10-05 08:45:34.019078] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.990 [2024-10-05 08:45:34.246215] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.990 [2024-10-05 08:45:34.246253] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.990 08:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:57.990 08:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:57.990 08:45:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:57.990 08:45:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:57.990 08:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.990 08:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.250 BaseBdev1_malloc 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.251 true 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.251 [2024-10-05 08:45:34.491411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:58.251 [2024-10-05 08:45:34.491506] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:58.251 [2024-10-05 08:45:34.491541] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:58.251 [2024-10-05 08:45:34.491572] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:58.251 [2024-10-05 08:45:34.493907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:58.251 [2024-10-05 08:45:34.493994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:58.251 BaseBdev1 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:58.251 BaseBdev2_malloc 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.251 true 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.251 [2024-10-05 08:45:34.592244] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:58.251 [2024-10-05 08:45:34.592334] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:58.251 [2024-10-05 08:45:34.592366] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:58.251 [2024-10-05 08:45:34.592392] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:58.251 [2024-10-05 08:45:34.594637] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:58.251 [2024-10-05 08:45:34.594706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:58.251 BaseBdev2 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:58.251 08:45:34 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.251 BaseBdev3_malloc 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.251 true 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.251 [2024-10-05 08:45:34.663825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:58.251 [2024-10-05 08:45:34.663872] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:58.251 [2024-10-05 08:45:34.663888] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:58.251 [2024-10-05 08:45:34.663900] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:58.251 [2024-10-05 08:45:34.666154] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:58.251 [2024-10-05 08:45:34.666190] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:58.251 BaseBdev3 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.251 [2024-10-05 08:45:34.675877] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:58.251 [2024-10-05 08:45:34.677861] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:58.251 [2024-10-05 08:45:34.677998] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:58.251 [2024-10-05 08:45:34.678195] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:58.251 [2024-10-05 08:45:34.678207] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:58.251 [2024-10-05 08:45:34.678438] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:58.251 [2024-10-05 08:45:34.678584] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:58.251 [2024-10-05 08:45:34.678595] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:58.251 [2024-10-05 08:45:34.678740] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.251 08:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.511 08:45:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.511 "name": "raid_bdev1", 00:08:58.511 "uuid": "80e17a25-04d8-4c30-9d29-f5ffbeda8050", 00:08:58.511 "strip_size_kb": 64, 00:08:58.511 "state": "online", 00:08:58.511 "raid_level": "raid0", 00:08:58.511 "superblock": true, 00:08:58.511 "num_base_bdevs": 3, 00:08:58.511 "num_base_bdevs_discovered": 3, 00:08:58.511 "num_base_bdevs_operational": 3, 00:08:58.511 "base_bdevs_list": [ 00:08:58.511 { 00:08:58.511 "name": "BaseBdev1", 
00:08:58.511 "uuid": "3438b4ef-4601-5915-b393-91a8096612e3", 00:08:58.511 "is_configured": true, 00:08:58.511 "data_offset": 2048, 00:08:58.511 "data_size": 63488 00:08:58.511 }, 00:08:58.511 { 00:08:58.511 "name": "BaseBdev2", 00:08:58.511 "uuid": "b3f3f509-bfd2-5a2c-ad09-965fc79b92e0", 00:08:58.511 "is_configured": true, 00:08:58.511 "data_offset": 2048, 00:08:58.511 "data_size": 63488 00:08:58.511 }, 00:08:58.511 { 00:08:58.511 "name": "BaseBdev3", 00:08:58.511 "uuid": "8ada1d94-24fc-5d0f-95bd-e7bc6e67b8a8", 00:08:58.511 "is_configured": true, 00:08:58.511 "data_offset": 2048, 00:08:58.511 "data_size": 63488 00:08:58.511 } 00:08:58.511 ] 00:08:58.511 }' 00:08:58.511 08:45:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.511 08:45:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.771 08:45:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:58.771 08:45:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:59.031 [2024-10-05 08:45:35.244328] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:59.971 08:45:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:59.971 08:45:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.971 08:45:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.971 08:45:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.971 08:45:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:59.971 08:45:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:59.971 08:45:36 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:59.971 08:45:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:59.971 08:45:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:59.971 08:45:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:59.971 08:45:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.971 08:45:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.971 08:45:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.971 08:45:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.971 08:45:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.971 08:45:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.971 08:45:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.971 08:45:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.971 08:45:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:59.971 08:45:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.971 08:45:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.971 08:45:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.971 08:45:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.971 "name": "raid_bdev1", 00:08:59.971 "uuid": "80e17a25-04d8-4c30-9d29-f5ffbeda8050", 00:08:59.971 "strip_size_kb": 64, 00:08:59.971 "state": "online", 00:08:59.971 
"raid_level": "raid0", 00:08:59.971 "superblock": true, 00:08:59.971 "num_base_bdevs": 3, 00:08:59.971 "num_base_bdevs_discovered": 3, 00:08:59.971 "num_base_bdevs_operational": 3, 00:08:59.971 "base_bdevs_list": [ 00:08:59.971 { 00:08:59.971 "name": "BaseBdev1", 00:08:59.971 "uuid": "3438b4ef-4601-5915-b393-91a8096612e3", 00:08:59.971 "is_configured": true, 00:08:59.971 "data_offset": 2048, 00:08:59.971 "data_size": 63488 00:08:59.971 }, 00:08:59.971 { 00:08:59.971 "name": "BaseBdev2", 00:08:59.971 "uuid": "b3f3f509-bfd2-5a2c-ad09-965fc79b92e0", 00:08:59.971 "is_configured": true, 00:08:59.971 "data_offset": 2048, 00:08:59.971 "data_size": 63488 00:08:59.971 }, 00:08:59.971 { 00:08:59.971 "name": "BaseBdev3", 00:08:59.971 "uuid": "8ada1d94-24fc-5d0f-95bd-e7bc6e67b8a8", 00:08:59.971 "is_configured": true, 00:08:59.971 "data_offset": 2048, 00:08:59.971 "data_size": 63488 00:08:59.971 } 00:08:59.971 ] 00:08:59.971 }' 00:08:59.971 08:45:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.971 08:45:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.231 08:45:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:00.231 08:45:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.231 08:45:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.231 [2024-10-05 08:45:36.564540] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:00.231 [2024-10-05 08:45:36.564581] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:00.231 [2024-10-05 08:45:36.567174] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:00.231 [2024-10-05 08:45:36.567223] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:00.231 [2024-10-05 08:45:36.567264] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:00.231 [2024-10-05 08:45:36.567274] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:00.231 { 00:09:00.231 "results": [ 00:09:00.231 { 00:09:00.231 "job": "raid_bdev1", 00:09:00.231 "core_mask": "0x1", 00:09:00.231 "workload": "randrw", 00:09:00.231 "percentage": 50, 00:09:00.231 "status": "finished", 00:09:00.231 "queue_depth": 1, 00:09:00.231 "io_size": 131072, 00:09:00.231 "runtime": 1.3207, 00:09:00.231 "iops": 14331.036571515106, 00:09:00.231 "mibps": 1791.3795714393882, 00:09:00.231 "io_failed": 1, 00:09:00.231 "io_timeout": 0, 00:09:00.231 "avg_latency_us": 98.29330360603456, 00:09:00.231 "min_latency_us": 24.593886462882097, 00:09:00.231 "max_latency_us": 1345.0620087336245 00:09:00.231 } 00:09:00.231 ], 00:09:00.231 "core_count": 1 00:09:00.231 } 00:09:00.231 08:45:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.231 08:45:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 64777 00:09:00.231 08:45:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 64777 ']' 00:09:00.231 08:45:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 64777 00:09:00.231 08:45:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:00.231 08:45:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:00.231 08:45:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64777 00:09:00.231 08:45:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:00.231 08:45:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:00.231 08:45:36 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 64777' 00:09:00.231 killing process with pid 64777 00:09:00.231 08:45:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 64777 00:09:00.231 [2024-10-05 08:45:36.616739] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:00.231 08:45:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 64777 00:09:00.492 [2024-10-05 08:45:36.862081] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:01.878 08:45:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.I1XhxurUX5 00:09:01.878 08:45:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:01.878 08:45:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:01.878 08:45:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:09:01.878 08:45:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:01.878 ************************************ 00:09:01.878 END TEST raid_write_error_test 00:09:01.878 ************************************ 00:09:01.878 08:45:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:01.878 08:45:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:01.878 08:45:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:09:01.878 00:09:01.878 real 0m4.750s 00:09:01.878 user 0m5.451s 00:09:01.878 sys 0m0.675s 00:09:01.878 08:45:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:01.878 08:45:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.878 08:45:38 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:01.878 08:45:38 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:09:01.878 08:45:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:01.878 08:45:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:01.878 08:45:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:01.878 ************************************ 00:09:01.878 START TEST raid_state_function_test 00:09:01.878 ************************************ 00:09:01.878 08:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:09:01.878 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:01.878 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:01.878 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:01.878 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:01.878 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:01.878 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:01.878 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:01.878 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:01.878 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:01.878 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:01.878 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:01.878 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:01.878 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:01.878 08:45:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:01.878 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:01.878 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:01.878 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:01.878 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:01.878 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:01.878 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:01.878 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:01.878 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:01.878 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:01.878 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:01.878 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:01.879 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:01.879 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=64891 00:09:01.879 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:01.879 Process raid pid: 64891 00:09:01.879 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64891' 00:09:01.879 08:45:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 64891 00:09:01.879 08:45:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 64891 ']' 00:09:01.879 08:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.879 08:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:01.879 08:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.879 08:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:01.879 08:45:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.148 [2024-10-05 08:45:38.426151] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:09:02.148 [2024-10-05 08:45:38.426384] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.148 [2024-10-05 08:45:38.590005] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.407 [2024-10-05 08:45:38.832942] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.667 [2024-10-05 08:45:39.063988] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:02.668 [2024-10-05 08:45:39.064029] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:02.928 08:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:02.928 08:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:02.928 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:02.928 08:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.928 08:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.928 [2024-10-05 08:45:39.243826] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:02.928 [2024-10-05 08:45:39.243891] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:02.928 [2024-10-05 08:45:39.243901] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:02.928 [2024-10-05 08:45:39.243912] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:02.928 [2024-10-05 08:45:39.243918] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:02.928 [2024-10-05 08:45:39.243926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:02.928 08:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.928 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:02.928 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.928 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.928 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:02.928 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.928 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.928 08:45:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.928 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.928 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.928 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.928 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.928 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.928 08:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.928 08:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.928 08:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.928 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.928 "name": "Existed_Raid", 00:09:02.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.928 "strip_size_kb": 64, 00:09:02.928 "state": "configuring", 00:09:02.928 "raid_level": "concat", 00:09:02.928 "superblock": false, 00:09:02.928 "num_base_bdevs": 3, 00:09:02.928 "num_base_bdevs_discovered": 0, 00:09:02.928 "num_base_bdevs_operational": 3, 00:09:02.928 "base_bdevs_list": [ 00:09:02.928 { 00:09:02.928 "name": "BaseBdev1", 00:09:02.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.928 "is_configured": false, 00:09:02.928 "data_offset": 0, 00:09:02.928 "data_size": 0 00:09:02.928 }, 00:09:02.928 { 00:09:02.928 "name": "BaseBdev2", 00:09:02.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.928 "is_configured": false, 00:09:02.928 "data_offset": 0, 00:09:02.928 "data_size": 0 00:09:02.928 }, 00:09:02.928 { 00:09:02.928 "name": "BaseBdev3", 00:09:02.928 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:02.928 "is_configured": false, 00:09:02.928 "data_offset": 0, 00:09:02.928 "data_size": 0 00:09:02.928 } 00:09:02.928 ] 00:09:02.928 }' 00:09:02.928 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.928 08:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.499 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:03.499 08:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.499 08:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.499 [2024-10-05 08:45:39.686978] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:03.499 [2024-10-05 08:45:39.687081] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:03.499 08:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.499 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:03.499 08:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.499 08:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.499 [2024-10-05 08:45:39.694998] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:03.499 [2024-10-05 08:45:39.695040] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:03.499 [2024-10-05 08:45:39.695048] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:03.499 [2024-10-05 08:45:39.695058] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:09:03.500 [2024-10-05 08:45:39.695064] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:03.500 [2024-10-05 08:45:39.695072] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:03.500 08:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.500 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:03.500 08:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.500 08:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.500 BaseBdev1 00:09:03.500 [2024-10-05 08:45:39.773627] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:03.500 08:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.500 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:03.500 08:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:03.500 08:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:03.500 08:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:03.500 08:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:03.500 08:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:03.500 08:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:03.500 08:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.500 08:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:03.500 08:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.500 08:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:03.500 08:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.500 08:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.500 [ 00:09:03.500 { 00:09:03.500 "name": "BaseBdev1", 00:09:03.500 "aliases": [ 00:09:03.500 "0d3b18e6-8a8a-4311-b9d8-3578830c5b5a" 00:09:03.500 ], 00:09:03.500 "product_name": "Malloc disk", 00:09:03.500 "block_size": 512, 00:09:03.500 "num_blocks": 65536, 00:09:03.500 "uuid": "0d3b18e6-8a8a-4311-b9d8-3578830c5b5a", 00:09:03.500 "assigned_rate_limits": { 00:09:03.500 "rw_ios_per_sec": 0, 00:09:03.500 "rw_mbytes_per_sec": 0, 00:09:03.500 "r_mbytes_per_sec": 0, 00:09:03.500 "w_mbytes_per_sec": 0 00:09:03.500 }, 00:09:03.500 "claimed": true, 00:09:03.500 "claim_type": "exclusive_write", 00:09:03.500 "zoned": false, 00:09:03.500 "supported_io_types": { 00:09:03.500 "read": true, 00:09:03.500 "write": true, 00:09:03.500 "unmap": true, 00:09:03.500 "flush": true, 00:09:03.500 "reset": true, 00:09:03.500 "nvme_admin": false, 00:09:03.500 "nvme_io": false, 00:09:03.500 "nvme_io_md": false, 00:09:03.500 "write_zeroes": true, 00:09:03.500 "zcopy": true, 00:09:03.500 "get_zone_info": false, 00:09:03.500 "zone_management": false, 00:09:03.500 "zone_append": false, 00:09:03.500 "compare": false, 00:09:03.500 "compare_and_write": false, 00:09:03.500 "abort": true, 00:09:03.500 "seek_hole": false, 00:09:03.500 "seek_data": false, 00:09:03.500 "copy": true, 00:09:03.500 "nvme_iov_md": false 00:09:03.500 }, 00:09:03.500 "memory_domains": [ 00:09:03.500 { 00:09:03.500 "dma_device_id": "system", 00:09:03.500 "dma_device_type": 1 00:09:03.500 }, 00:09:03.500 { 00:09:03.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:03.500 "dma_device_type": 2 00:09:03.500 } 00:09:03.500 ], 00:09:03.500 "driver_specific": {} 00:09:03.500 } 00:09:03.500 ] 00:09:03.500 08:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.500 08:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:03.500 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:03.500 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.500 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.500 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:03.500 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.500 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.500 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.500 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.500 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.500 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.500 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.500 08:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.500 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.500 08:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.500 08:45:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.500 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.500 "name": "Existed_Raid", 00:09:03.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.500 "strip_size_kb": 64, 00:09:03.500 "state": "configuring", 00:09:03.500 "raid_level": "concat", 00:09:03.500 "superblock": false, 00:09:03.500 "num_base_bdevs": 3, 00:09:03.500 "num_base_bdevs_discovered": 1, 00:09:03.500 "num_base_bdevs_operational": 3, 00:09:03.500 "base_bdevs_list": [ 00:09:03.500 { 00:09:03.500 "name": "BaseBdev1", 00:09:03.500 "uuid": "0d3b18e6-8a8a-4311-b9d8-3578830c5b5a", 00:09:03.500 "is_configured": true, 00:09:03.500 "data_offset": 0, 00:09:03.500 "data_size": 65536 00:09:03.500 }, 00:09:03.500 { 00:09:03.500 "name": "BaseBdev2", 00:09:03.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.500 "is_configured": false, 00:09:03.500 "data_offset": 0, 00:09:03.500 "data_size": 0 00:09:03.500 }, 00:09:03.500 { 00:09:03.500 "name": "BaseBdev3", 00:09:03.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.500 "is_configured": false, 00:09:03.500 "data_offset": 0, 00:09:03.500 "data_size": 0 00:09:03.500 } 00:09:03.500 ] 00:09:03.500 }' 00:09:03.500 08:45:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.500 08:45:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.760 08:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:03.760 08:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.760 08:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.760 [2024-10-05 08:45:40.212869] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:03.760 [2024-10-05 08:45:40.212911] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:03.760 08:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.760 08:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:03.760 08:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.760 08:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.760 [2024-10-05 08:45:40.220903] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:03.760 [2024-10-05 08:45:40.222996] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:03.760 [2024-10-05 08:45:40.223071] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:03.760 [2024-10-05 08:45:40.223111] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:03.760 [2024-10-05 08:45:40.223133] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:03.760 08:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.760 08:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:03.760 08:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:03.760 08:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:03.760 08:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.760 08:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.760 08:45:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:03.760 08:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.760 08:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.760 08:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.760 08:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.760 08:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.760 08:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.020 08:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.020 08:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.020 08:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.020 08:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.020 08:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.020 08:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.020 "name": "Existed_Raid", 00:09:04.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.020 "strip_size_kb": 64, 00:09:04.020 "state": "configuring", 00:09:04.020 "raid_level": "concat", 00:09:04.020 "superblock": false, 00:09:04.020 "num_base_bdevs": 3, 00:09:04.020 "num_base_bdevs_discovered": 1, 00:09:04.020 "num_base_bdevs_operational": 3, 00:09:04.020 "base_bdevs_list": [ 00:09:04.020 { 00:09:04.020 "name": "BaseBdev1", 00:09:04.020 "uuid": "0d3b18e6-8a8a-4311-b9d8-3578830c5b5a", 00:09:04.020 "is_configured": true, 00:09:04.020 "data_offset": 
0, 00:09:04.020 "data_size": 65536 00:09:04.020 }, 00:09:04.020 { 00:09:04.020 "name": "BaseBdev2", 00:09:04.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.020 "is_configured": false, 00:09:04.020 "data_offset": 0, 00:09:04.020 "data_size": 0 00:09:04.020 }, 00:09:04.020 { 00:09:04.020 "name": "BaseBdev3", 00:09:04.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.020 "is_configured": false, 00:09:04.020 "data_offset": 0, 00:09:04.020 "data_size": 0 00:09:04.020 } 00:09:04.020 ] 00:09:04.020 }' 00:09:04.020 08:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.020 08:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.280 08:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:04.280 08:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.280 08:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.280 [2024-10-05 08:45:40.687606] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:04.280 BaseBdev2 00:09:04.280 08:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.280 08:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:04.280 08:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:04.280 08:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:04.280 08:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:04.280 08:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:04.280 08:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:09:04.280 08:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:04.280 08:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.280 08:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.280 08:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.280 08:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:04.280 08:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.280 08:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.280 [ 00:09:04.280 { 00:09:04.280 "name": "BaseBdev2", 00:09:04.280 "aliases": [ 00:09:04.280 "3a4a34d9-adfc-4db5-8dff-68b805767ee5" 00:09:04.280 ], 00:09:04.280 "product_name": "Malloc disk", 00:09:04.280 "block_size": 512, 00:09:04.280 "num_blocks": 65536, 00:09:04.280 "uuid": "3a4a34d9-adfc-4db5-8dff-68b805767ee5", 00:09:04.280 "assigned_rate_limits": { 00:09:04.280 "rw_ios_per_sec": 0, 00:09:04.280 "rw_mbytes_per_sec": 0, 00:09:04.280 "r_mbytes_per_sec": 0, 00:09:04.280 "w_mbytes_per_sec": 0 00:09:04.280 }, 00:09:04.280 "claimed": true, 00:09:04.280 "claim_type": "exclusive_write", 00:09:04.280 "zoned": false, 00:09:04.280 "supported_io_types": { 00:09:04.280 "read": true, 00:09:04.280 "write": true, 00:09:04.280 "unmap": true, 00:09:04.280 "flush": true, 00:09:04.280 "reset": true, 00:09:04.280 "nvme_admin": false, 00:09:04.280 "nvme_io": false, 00:09:04.280 "nvme_io_md": false, 00:09:04.281 "write_zeroes": true, 00:09:04.281 "zcopy": true, 00:09:04.281 "get_zone_info": false, 00:09:04.281 "zone_management": false, 00:09:04.281 "zone_append": false, 00:09:04.281 "compare": false, 00:09:04.281 "compare_and_write": false, 00:09:04.281 "abort": true, 00:09:04.281 "seek_hole": 
false, 00:09:04.281 "seek_data": false, 00:09:04.281 "copy": true, 00:09:04.281 "nvme_iov_md": false 00:09:04.281 }, 00:09:04.281 "memory_domains": [ 00:09:04.281 { 00:09:04.281 "dma_device_id": "system", 00:09:04.281 "dma_device_type": 1 00:09:04.281 }, 00:09:04.281 { 00:09:04.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.281 "dma_device_type": 2 00:09:04.281 } 00:09:04.281 ], 00:09:04.281 "driver_specific": {} 00:09:04.281 } 00:09:04.281 ] 00:09:04.281 08:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.281 08:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:04.281 08:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:04.281 08:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:04.281 08:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:04.281 08:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.281 08:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.281 08:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:04.281 08:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.281 08:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.281 08:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.281 08:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.281 08:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.281 08:45:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.281 08:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.281 08:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.281 08:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.281 08:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.541 08:45:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.541 08:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.541 "name": "Existed_Raid", 00:09:04.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.541 "strip_size_kb": 64, 00:09:04.541 "state": "configuring", 00:09:04.541 "raid_level": "concat", 00:09:04.541 "superblock": false, 00:09:04.541 "num_base_bdevs": 3, 00:09:04.541 "num_base_bdevs_discovered": 2, 00:09:04.541 "num_base_bdevs_operational": 3, 00:09:04.541 "base_bdevs_list": [ 00:09:04.541 { 00:09:04.541 "name": "BaseBdev1", 00:09:04.541 "uuid": "0d3b18e6-8a8a-4311-b9d8-3578830c5b5a", 00:09:04.541 "is_configured": true, 00:09:04.541 "data_offset": 0, 00:09:04.541 "data_size": 65536 00:09:04.541 }, 00:09:04.541 { 00:09:04.541 "name": "BaseBdev2", 00:09:04.541 "uuid": "3a4a34d9-adfc-4db5-8dff-68b805767ee5", 00:09:04.541 "is_configured": true, 00:09:04.541 "data_offset": 0, 00:09:04.541 "data_size": 65536 00:09:04.541 }, 00:09:04.541 { 00:09:04.541 "name": "BaseBdev3", 00:09:04.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.541 "is_configured": false, 00:09:04.541 "data_offset": 0, 00:09:04.541 "data_size": 0 00:09:04.541 } 00:09:04.541 ] 00:09:04.541 }' 00:09:04.541 08:45:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.541 08:45:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:04.800 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:04.800 08:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.800 08:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.800 [2024-10-05 08:45:41.224286] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:04.800 [2024-10-05 08:45:41.224340] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:04.800 [2024-10-05 08:45:41.224356] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:04.800 [2024-10-05 08:45:41.224651] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:04.800 [2024-10-05 08:45:41.224845] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:04.800 [2024-10-05 08:45:41.224857] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:04.800 [2024-10-05 08:45:41.225170] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:04.800 BaseBdev3 00:09:04.800 08:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.800 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:04.800 08:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:04.800 08:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:04.800 08:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:04.800 08:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:04.800 08:45:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:04.800 08:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:04.800 08:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.800 08:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.800 08:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.800 08:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:04.800 08:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.800 08:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.800 [ 00:09:04.800 { 00:09:04.801 "name": "BaseBdev3", 00:09:04.801 "aliases": [ 00:09:04.801 "226a2491-6fc8-4d54-ac8b-0f7f1b110ef4" 00:09:04.801 ], 00:09:04.801 "product_name": "Malloc disk", 00:09:04.801 "block_size": 512, 00:09:04.801 "num_blocks": 65536, 00:09:04.801 "uuid": "226a2491-6fc8-4d54-ac8b-0f7f1b110ef4", 00:09:04.801 "assigned_rate_limits": { 00:09:04.801 "rw_ios_per_sec": 0, 00:09:04.801 "rw_mbytes_per_sec": 0, 00:09:04.801 "r_mbytes_per_sec": 0, 00:09:04.801 "w_mbytes_per_sec": 0 00:09:04.801 }, 00:09:04.801 "claimed": true, 00:09:04.801 "claim_type": "exclusive_write", 00:09:04.801 "zoned": false, 00:09:04.801 "supported_io_types": { 00:09:04.801 "read": true, 00:09:04.801 "write": true, 00:09:04.801 "unmap": true, 00:09:04.801 "flush": true, 00:09:04.801 "reset": true, 00:09:04.801 "nvme_admin": false, 00:09:04.801 "nvme_io": false, 00:09:04.801 "nvme_io_md": false, 00:09:04.801 "write_zeroes": true, 00:09:04.801 "zcopy": true, 00:09:04.801 "get_zone_info": false, 00:09:04.801 "zone_management": false, 00:09:04.801 "zone_append": false, 00:09:04.801 "compare": false, 
00:09:04.801 "compare_and_write": false, 00:09:04.801 "abort": true, 00:09:04.801 "seek_hole": false, 00:09:04.801 "seek_data": false, 00:09:04.801 "copy": true, 00:09:04.801 "nvme_iov_md": false 00:09:04.801 }, 00:09:04.801 "memory_domains": [ 00:09:04.801 { 00:09:04.801 "dma_device_id": "system", 00:09:04.801 "dma_device_type": 1 00:09:04.801 }, 00:09:04.801 { 00:09:04.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.801 "dma_device_type": 2 00:09:04.801 } 00:09:04.801 ], 00:09:04.801 "driver_specific": {} 00:09:04.801 } 00:09:04.801 ] 00:09:04.801 08:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.801 08:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:04.801 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:04.801 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:04.801 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:04.801 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.801 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:04.801 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:04.801 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.801 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.801 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.801 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.801 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:04.801 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.801 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.801 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.061 08:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.061 08:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.061 08:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.061 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.061 "name": "Existed_Raid", 00:09:05.061 "uuid": "5cf63445-da9f-419d-8f49-212681dbe26d", 00:09:05.061 "strip_size_kb": 64, 00:09:05.061 "state": "online", 00:09:05.061 "raid_level": "concat", 00:09:05.061 "superblock": false, 00:09:05.061 "num_base_bdevs": 3, 00:09:05.061 "num_base_bdevs_discovered": 3, 00:09:05.061 "num_base_bdevs_operational": 3, 00:09:05.061 "base_bdevs_list": [ 00:09:05.061 { 00:09:05.061 "name": "BaseBdev1", 00:09:05.061 "uuid": "0d3b18e6-8a8a-4311-b9d8-3578830c5b5a", 00:09:05.061 "is_configured": true, 00:09:05.061 "data_offset": 0, 00:09:05.061 "data_size": 65536 00:09:05.061 }, 00:09:05.061 { 00:09:05.061 "name": "BaseBdev2", 00:09:05.061 "uuid": "3a4a34d9-adfc-4db5-8dff-68b805767ee5", 00:09:05.061 "is_configured": true, 00:09:05.061 "data_offset": 0, 00:09:05.061 "data_size": 65536 00:09:05.061 }, 00:09:05.061 { 00:09:05.061 "name": "BaseBdev3", 00:09:05.061 "uuid": "226a2491-6fc8-4d54-ac8b-0f7f1b110ef4", 00:09:05.061 "is_configured": true, 00:09:05.061 "data_offset": 0, 00:09:05.061 "data_size": 65536 00:09:05.061 } 00:09:05.061 ] 00:09:05.061 }' 00:09:05.061 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:05.061 08:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.321 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:09:05.321 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:09:05.321 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:05.321 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:05.321 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:09:05.321 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:05.321 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:09:05.321 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:05.321 08:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.321 08:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.321 [2024-10-05 08:45:41.719819] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:05.321 08:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.321 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:05.321 "name": "Existed_Raid",
00:09:05.321 "aliases": [
00:09:05.321 "5cf63445-da9f-419d-8f49-212681dbe26d"
00:09:05.321 ],
00:09:05.321 "product_name": "Raid Volume",
00:09:05.321 "block_size": 512,
00:09:05.321 "num_blocks": 196608,
00:09:05.321 "uuid": "5cf63445-da9f-419d-8f49-212681dbe26d",
00:09:05.321 "assigned_rate_limits": {
00:09:05.321 "rw_ios_per_sec": 0,
00:09:05.321 "rw_mbytes_per_sec": 0,
00:09:05.321 "r_mbytes_per_sec": 0,
00:09:05.321 "w_mbytes_per_sec": 0
00:09:05.321 },
00:09:05.321 "claimed": false,
00:09:05.321 "zoned": false,
00:09:05.321 "supported_io_types": {
00:09:05.321 "read": true,
00:09:05.321 "write": true,
00:09:05.321 "unmap": true,
00:09:05.321 "flush": true,
00:09:05.321 "reset": true,
00:09:05.321 "nvme_admin": false,
00:09:05.321 "nvme_io": false,
00:09:05.321 "nvme_io_md": false,
00:09:05.321 "write_zeroes": true,
00:09:05.321 "zcopy": false,
00:09:05.321 "get_zone_info": false,
00:09:05.321 "zone_management": false,
00:09:05.321 "zone_append": false,
00:09:05.321 "compare": false,
00:09:05.321 "compare_and_write": false,
00:09:05.321 "abort": false,
00:09:05.321 "seek_hole": false,
00:09:05.321 "seek_data": false,
00:09:05.321 "copy": false,
00:09:05.321 "nvme_iov_md": false
00:09:05.321 },
00:09:05.321 "memory_domains": [
00:09:05.321 {
00:09:05.321 "dma_device_id": "system",
00:09:05.321 "dma_device_type": 1
00:09:05.321 },
00:09:05.321 {
00:09:05.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:05.321 "dma_device_type": 2
00:09:05.321 },
00:09:05.321 {
00:09:05.321 "dma_device_id": "system",
00:09:05.321 "dma_device_type": 1
00:09:05.321 },
00:09:05.321 {
00:09:05.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:05.321 "dma_device_type": 2
00:09:05.321 },
00:09:05.321 {
00:09:05.321 "dma_device_id": "system",
00:09:05.321 "dma_device_type": 1
00:09:05.321 },
00:09:05.321 {
00:09:05.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:05.321 "dma_device_type": 2
00:09:05.321 }
00:09:05.321 ],
00:09:05.321 "driver_specific": {
00:09:05.321 "raid": {
00:09:05.321 "uuid": "5cf63445-da9f-419d-8f49-212681dbe26d",
00:09:05.321 "strip_size_kb": 64,
00:09:05.321 "state": "online",
00:09:05.321 "raid_level": "concat",
00:09:05.321 "superblock": false,
00:09:05.321 "num_base_bdevs": 3,
00:09:05.321 "num_base_bdevs_discovered": 3,
00:09:05.321 "num_base_bdevs_operational": 3,
00:09:05.321 "base_bdevs_list": [
00:09:05.321 {
00:09:05.321 "name": "BaseBdev1",
00:09:05.321 "uuid": "0d3b18e6-8a8a-4311-b9d8-3578830c5b5a",
00:09:05.321 "is_configured": true,
00:09:05.321 "data_offset": 0,
00:09:05.321 "data_size": 65536
00:09:05.321 },
00:09:05.321 {
00:09:05.321 "name": "BaseBdev2",
00:09:05.321 "uuid": "3a4a34d9-adfc-4db5-8dff-68b805767ee5",
00:09:05.321 "is_configured": true,
00:09:05.321 "data_offset": 0,
00:09:05.321 "data_size": 65536
00:09:05.321 },
00:09:05.321 {
00:09:05.321 "name": "BaseBdev3",
00:09:05.321 "uuid": "226a2491-6fc8-4d54-ac8b-0f7f1b110ef4",
00:09:05.321 "is_configured": true,
00:09:05.321 "data_offset": 0,
00:09:05.321 "data_size": 65536
00:09:05.321 }
00:09:05.321 ]
00:09:05.321 }
00:09:05.321 }
00:09:05.321 }'
00:09:05.321 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:05.321 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:09:05.321 BaseBdev2
00:09:05.321 BaseBdev3'
00:09:05.321 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:05.581 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:05.581 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:05.581 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:09:05.581 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:05.581 08:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.581 08:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.581 08:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.581 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:05.581 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:05.581 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:05.581 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:09:05.581 08:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.581 08:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.581 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:05.581 08:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.581 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:05.581 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:05.581 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:05.581 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:09:05.581 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:05.581 08:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.581 08:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.581 08:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.581 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:05.581 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:05.581 08:45:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:09:05.581 08:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.581 08:45:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.581 [2024-10-05 08:45:41.979122] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 [2024-10-05 08:45:41.979158] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline [2024-10-05 08:45:41.979219] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:05.841 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.841 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:09:05.841 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:09:05.841 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:05.841 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1
00:09:05.841 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:09:05.841 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2
00:09:05.841 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:05.841 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:09:05.841 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:05.841 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:05.841 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:05.841 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:05.841 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:05.841 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:05.841 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:05.841 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:05.841 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:05.841 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.841 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.841 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.841 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:05.841 "name": "Existed_Raid",
00:09:05.841 "uuid": "5cf63445-da9f-419d-8f49-212681dbe26d",
00:09:05.841 "strip_size_kb": 64,
00:09:05.841 "state": "offline",
00:09:05.841 "raid_level": "concat",
00:09:05.841 "superblock": false,
00:09:05.841 "num_base_bdevs": 3,
00:09:05.841 "num_base_bdevs_discovered": 2,
00:09:05.841 "num_base_bdevs_operational": 2,
00:09:05.841 "base_bdevs_list": [
00:09:05.841 {
00:09:05.841 "name": null,
00:09:05.841 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:05.841 "is_configured": false,
00:09:05.841 "data_offset": 0,
00:09:05.841 "data_size": 65536
00:09:05.841 },
00:09:05.841 {
00:09:05.841 "name": "BaseBdev2",
00:09:05.841 "uuid": "3a4a34d9-adfc-4db5-8dff-68b805767ee5",
00:09:05.841 "is_configured": true,
00:09:05.841 "data_offset": 0,
00:09:05.841 "data_size": 65536
00:09:05.841 },
00:09:05.841 {
00:09:05.841 "name": "BaseBdev3",
00:09:05.841 "uuid": "226a2491-6fc8-4d54-ac8b-0f7f1b110ef4",
00:09:05.841 "is_configured": true,
00:09:05.841 "data_offset": 0,
00:09:05.841 "data_size": 65536
00:09:05.841 }
00:09:05.841 ]
00:09:05.841 }'
00:09:05.841 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:05.841 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.101 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:09:06.101 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:06.101 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:06.101 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:06.101 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.101 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:09:06.101 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:06.101 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:09:06.101 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:09:06.101 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:09:06.101 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:06.101 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.101 [2024-10-05 08:45:42.502815] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:09:06.361 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:06.361 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:09:06.361 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:06.361 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:06.361 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:09:06.361 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:06.361 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.361 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:06.361 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:09:06.361 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:09:06.361 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:09:06.361 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:06.361 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.361 [2024-10-05 08:45:42.665360] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 [2024-10-05 08:45:42.665463] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:09:06.361 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:06.361 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:09:06.361 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:06.361 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:06.361 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:06.361 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.361 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:09:06.361 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:06.361 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:09:06.361 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:09:06.361 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:09:06.361 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:09:06.361 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:06.361 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:09:06.361 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:06.361 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.622 BaseBdev2
00:09:06.622 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:06.622 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:09:06.622 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:09:06.622 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:09:06.622 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:06.622 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:06.622 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:06.622 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:06.622 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.622 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:06.622 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:06.622 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:06.622 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.622 [
00:09:06.622 {
00:09:06.622 "name": "BaseBdev2",
00:09:06.622 "aliases": [
00:09:06.622 "6e08b52b-51d2-415e-9506-4801bae3ad0e"
00:09:06.622 ],
00:09:06.622 "product_name": "Malloc disk",
00:09:06.622 "block_size": 512,
00:09:06.622 "num_blocks": 65536,
00:09:06.622 "uuid": "6e08b52b-51d2-415e-9506-4801bae3ad0e",
00:09:06.622 "assigned_rate_limits": {
00:09:06.622 "rw_ios_per_sec": 0,
00:09:06.622 "rw_mbytes_per_sec": 0,
00:09:06.622 "r_mbytes_per_sec": 0,
00:09:06.622 "w_mbytes_per_sec": 0
00:09:06.622 },
00:09:06.622 "claimed": false,
00:09:06.622 "zoned": false,
00:09:06.622 "supported_io_types": {
00:09:06.622 "read": true,
00:09:06.622 "write": true,
00:09:06.622 "unmap": true,
00:09:06.622 "flush": true,
00:09:06.622 "reset": true,
00:09:06.622 "nvme_admin": false,
00:09:06.622 "nvme_io": false,
00:09:06.622 "nvme_io_md": false,
00:09:06.622 "write_zeroes": true,
00:09:06.622 "zcopy": true,
00:09:06.622 "get_zone_info": false,
00:09:06.622 "zone_management": false,
00:09:06.622 "zone_append": false,
00:09:06.622 "compare": false,
00:09:06.622 "compare_and_write": false,
00:09:06.622 "abort": true,
00:09:06.622 "seek_hole": false,
00:09:06.622 "seek_data": false,
00:09:06.622 "copy": true,
00:09:06.622 "nvme_iov_md": false
00:09:06.622 },
00:09:06.622 "memory_domains": [
00:09:06.622 {
00:09:06.622 "dma_device_id": "system",
00:09:06.622 "dma_device_type": 1
00:09:06.622 },
00:09:06.622 {
00:09:06.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:06.622 "dma_device_type": 2
00:09:06.622 }
00:09:06.622 ],
00:09:06.622 "driver_specific": {}
00:09:06.622 }
00:09:06.622 ]
00:09:06.622 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:06.622 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:09:06.622 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:06.622 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:06.622 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:09:06.622 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:06.622 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.622 BaseBdev3
00:09:06.622 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:06.622 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:09:06.622 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:09:06.622 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:09:06.622 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:06.622 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:06.622 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:06.622 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:06.622 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.622 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:06.622 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:09:06.622 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:06.622 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.622 [
00:09:06.622 {
00:09:06.622 "name": "BaseBdev3",
00:09:06.622 "aliases": [
00:09:06.622 "313550fa-4511-436c-b926-84cf451b15f2"
00:09:06.622 ],
00:09:06.622 "product_name": "Malloc disk",
00:09:06.622 "block_size": 512,
00:09:06.622 "num_blocks": 65536,
00:09:06.622 "uuid": "313550fa-4511-436c-b926-84cf451b15f2",
00:09:06.622 "assigned_rate_limits": {
00:09:06.622 "rw_ios_per_sec": 0,
00:09:06.622 "rw_mbytes_per_sec": 0,
00:09:06.622 "r_mbytes_per_sec": 0,
00:09:06.622 "w_mbytes_per_sec": 0
00:09:06.622 },
00:09:06.622 "claimed": false,
00:09:06.622 "zoned": false,
00:09:06.622 "supported_io_types": {
00:09:06.622 "read": true,
00:09:06.622 "write": true,
00:09:06.622 "unmap": true,
00:09:06.622 "flush": true,
00:09:06.622 "reset": true,
00:09:06.622 "nvme_admin": false,
00:09:06.622 "nvme_io": false,
00:09:06.622 "nvme_io_md": false,
00:09:06.622 "write_zeroes": true,
00:09:06.622 "zcopy": true,
00:09:06.622 "get_zone_info": false,
00:09:06.622 "zone_management": false,
00:09:06.622 "zone_append": false,
00:09:06.622 "compare": false,
00:09:06.622 "compare_and_write": false,
00:09:06.622 "abort": true,
00:09:06.622 "seek_hole": false,
00:09:06.622 "seek_data": false,
00:09:06.622 "copy": true,
00:09:06.622 "nvme_iov_md": false
00:09:06.622 },
00:09:06.622 "memory_domains": [
00:09:06.622 {
00:09:06.622 "dma_device_id": "system",
00:09:06.622 "dma_device_type": 1
00:09:06.622 },
00:09:06.622 {
00:09:06.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:06.622 "dma_device_type": 2
00:09:06.622 }
00:09:06.622 ],
00:09:06.622 "driver_specific": {}
00:09:06.622 }
00:09:06.622 ]
00:09:06.622 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:06.622 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:09:06.622 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:06.622 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:06.622 08:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:06.622 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:06.622 08:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.622 [2024-10-05 08:45:42.999042] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 [2024-10-05 08:45:42.999174] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now [2024-10-05 08:45:42.999217] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed [2024-10-05 08:45:43.001310] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:06.623 08:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:06.623 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:06.623 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:06.623 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:06.623 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:06.623 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:06.623 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:06.623 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:06.623 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:06.623 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:06.623 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:06.623 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:06.623 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:06.623 08:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:06.623 08:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.623 08:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:06.623 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:06.623 "name": "Existed_Raid",
00:09:06.623 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:06.623 "strip_size_kb": 64,
00:09:06.623 "state": "configuring",
00:09:06.623 "raid_level": "concat",
00:09:06.623 "superblock": false,
00:09:06.623 "num_base_bdevs": 3,
00:09:06.623 "num_base_bdevs_discovered": 2,
00:09:06.623 "num_base_bdevs_operational": 3,
00:09:06.623 "base_bdevs_list": [
00:09:06.623 {
00:09:06.623 "name": "BaseBdev1",
00:09:06.623 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:06.623 "is_configured": false,
00:09:06.623 "data_offset": 0,
00:09:06.623 "data_size": 0
00:09:06.623 },
00:09:06.623 {
00:09:06.623 "name": "BaseBdev2",
00:09:06.623 "uuid": "6e08b52b-51d2-415e-9506-4801bae3ad0e",
00:09:06.623 "is_configured": true,
00:09:06.623 "data_offset": 0,
00:09:06.623 "data_size": 65536
00:09:06.623 },
00:09:06.623 {
00:09:06.623 "name": "BaseBdev3",
00:09:06.623 "uuid": "313550fa-4511-436c-b926-84cf451b15f2",
00:09:06.623 "is_configured": true,
00:09:06.623 "data_offset": 0,
00:09:06.623 "data_size": 65536
00:09:06.623 }
00:09:06.623 ]
00:09:06.623 }'
00:09:06.623 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:06.623 08:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.190 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:09:07.190 08:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.190 08:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.190 [2024-10-05 08:45:43.454269] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:09:07.190 08:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.190 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:07.190 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:07.190 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:07.190 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:07.190 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:07.190 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:07.190 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:07.190 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:07.190 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:07.190 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:07.190 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:07.190 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:07.190 08:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.190 08:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.190 08:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.190 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:07.190 "name": "Existed_Raid",
00:09:07.190 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:07.190 "strip_size_kb": 64,
00:09:07.190 "state": "configuring",
00:09:07.190 "raid_level": "concat",
00:09:07.190 "superblock": false,
00:09:07.190 "num_base_bdevs": 3,
00:09:07.190 "num_base_bdevs_discovered": 1,
00:09:07.190 "num_base_bdevs_operational": 3,
00:09:07.190 "base_bdevs_list": [
00:09:07.190 {
00:09:07.190 "name": "BaseBdev1",
00:09:07.191 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:07.191 "is_configured": false,
00:09:07.191 "data_offset": 0,
00:09:07.191 "data_size": 0
00:09:07.191 },
00:09:07.191 {
00:09:07.191 "name": null,
00:09:07.191 "uuid": "6e08b52b-51d2-415e-9506-4801bae3ad0e",
00:09:07.191 "is_configured": false,
00:09:07.191 "data_offset": 0,
00:09:07.191 "data_size": 65536
00:09:07.191 },
00:09:07.191 {
00:09:07.191 "name": "BaseBdev3",
00:09:07.191 "uuid": "313550fa-4511-436c-b926-84cf451b15f2",
00:09:07.191 "is_configured": true,
00:09:07.191 "data_offset": 0,
00:09:07.191 "data_size": 65536
00:09:07.191 }
00:09:07.191 ]
00:09:07.191 }'
00:09:07.191 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:07.191 08:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.450 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:07.450 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:09:07.450 08:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.450 08:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.450 08:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.709 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:09:07.709 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:07.709 08:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.709 08:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.709 [2024-10-05 08:45:43.967707] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed BaseBdev1
00:09:07.709 08:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.709 08:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:09:07.709 08:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:09:07.709 08:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:07.709 08:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:09:07.709 08:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:07.709 08:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:07.709 08:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:07.709 08:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.709 08:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.709 08:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.709 08:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:07.709 08:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.709 08:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.709 [
00:09:07.709 {
00:09:07.709 "name": "BaseBdev1",
00:09:07.709 "aliases": [
00:09:07.709 "319aa027-cfd6-40b0-9357-2ac803e01a9e"
00:09:07.709 ],
00:09:07.709 "product_name": "Malloc disk",
00:09:07.709 "block_size": 512,
00:09:07.709 "num_blocks": 65536,
00:09:07.709 "uuid": "319aa027-cfd6-40b0-9357-2ac803e01a9e",
00:09:07.709 "assigned_rate_limits": {
00:09:07.709 "rw_ios_per_sec": 0,
00:09:07.709 "rw_mbytes_per_sec": 0,
00:09:07.709 "r_mbytes_per_sec": 0,
00:09:07.709 "w_mbytes_per_sec": 0
00:09:07.709 },
00:09:07.709 "claimed": true,
00:09:07.709 "claim_type": "exclusive_write",
00:09:07.709 "zoned": false,
00:09:07.709 "supported_io_types": {
00:09:07.709 "read": true,
00:09:07.709 "write": true,
00:09:07.709 "unmap": true,
00:09:07.709 "flush": true,
00:09:07.709 "reset": true,
00:09:07.709 "nvme_admin": false,
00:09:07.709 "nvme_io": false,
00:09:07.709 "nvme_io_md": false,
00:09:07.709 "write_zeroes": true,
00:09:07.709 "zcopy": true,
00:09:07.709 "get_zone_info": false,
00:09:07.709 "zone_management": false,
00:09:07.709 "zone_append": false,
00:09:07.709 "compare": false,
00:09:07.709 "compare_and_write": false,
00:09:07.709 "abort": true,
00:09:07.709 "seek_hole": false,
00:09:07.709 "seek_data": false,
00:09:07.709 "copy": true,
00:09:07.709 "nvme_iov_md": false
00:09:07.709 },
00:09:07.709 "memory_domains": [
00:09:07.709 {
00:09:07.709 "dma_device_id": "system",
00:09:07.709 "dma_device_type": 1
00:09:07.709 },
00:09:07.709 {
00:09:07.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:07.709 "dma_device_type": 2
00:09:07.709 }
00:09:07.709 ],
00:09:07.709 "driver_specific": {}
00:09:07.709 }
00:09:07.709 ]
00:09:07.709 08:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.709 08:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:09:07.709 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:07.709 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:07.709 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:07.709 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:07.709 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:07.709 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:07.709 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:07.709 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:07.709 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:07.709 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:07.709 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:07.709 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:07.709 08:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.709 08:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.709 08:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.709 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:07.709 "name": "Existed_Raid",
00:09:07.709 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:07.709 "strip_size_kb": 64,
00:09:07.709 "state": "configuring",
00:09:07.709 "raid_level": "concat",
00:09:07.709 "superblock": false,
00:09:07.709 "num_base_bdevs": 3,
00:09:07.709 "num_base_bdevs_discovered": 2,
00:09:07.709 "num_base_bdevs_operational": 3,
00:09:07.709 "base_bdevs_list": [
00:09:07.709 {
00:09:07.709 "name": "BaseBdev1",
00:09:07.709 "uuid": "319aa027-cfd6-40b0-9357-2ac803e01a9e",
00:09:07.709 "is_configured": true,
00:09:07.709 "data_offset": 0,
00:09:07.709 "data_size": 65536
00:09:07.709 },
00:09:07.709 {
00:09:07.709 "name": null,
00:09:07.709 "uuid": "6e08b52b-51d2-415e-9506-4801bae3ad0e",
00:09:07.709 "is_configured": false,
00:09:07.709 "data_offset": 0,
00:09:07.709 "data_size": 65536
00:09:07.709 },
00:09:07.709 {
00:09:07.709 "name": "BaseBdev3",
00:09:07.709 "uuid": "313550fa-4511-436c-b926-84cf451b15f2",
00:09:07.709 "is_configured": true,
00:09:07.709 "data_offset": 0,
00:09:07.709 "data_size": 65536
00:09:07.709 }
00:09:07.709 ]
00:09:07.709 }'
00:09:07.709 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:07.709 08:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.278 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:09:08.278 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:08.278 08:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.278 08:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.278 08:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:08.278 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:09:08.278 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:09:08.278 08:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.278 08:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.278 [2024-10-05 08:45:44.494896] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
08:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.278 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:08.278 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.278 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.278 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:08.278 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.278 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.278 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.278 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.278 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.278 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.278 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.278 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.278 08:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.278 08:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.278 08:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.278 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.278 "name": "Existed_Raid", 00:09:08.278 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:08.278 "strip_size_kb": 64, 00:09:08.278 "state": "configuring", 00:09:08.278 "raid_level": "concat", 00:09:08.278 "superblock": false, 00:09:08.278 "num_base_bdevs": 3, 00:09:08.278 "num_base_bdevs_discovered": 1, 00:09:08.278 "num_base_bdevs_operational": 3, 00:09:08.278 "base_bdevs_list": [ 00:09:08.278 { 00:09:08.278 "name": "BaseBdev1", 00:09:08.278 "uuid": "319aa027-cfd6-40b0-9357-2ac803e01a9e", 00:09:08.278 "is_configured": true, 00:09:08.278 "data_offset": 0, 00:09:08.278 "data_size": 65536 00:09:08.278 }, 00:09:08.278 { 00:09:08.278 "name": null, 00:09:08.278 "uuid": "6e08b52b-51d2-415e-9506-4801bae3ad0e", 00:09:08.278 "is_configured": false, 00:09:08.278 "data_offset": 0, 00:09:08.278 "data_size": 65536 00:09:08.278 }, 00:09:08.278 { 00:09:08.278 "name": null, 00:09:08.278 "uuid": "313550fa-4511-436c-b926-84cf451b15f2", 00:09:08.278 "is_configured": false, 00:09:08.278 "data_offset": 0, 00:09:08.278 "data_size": 65536 00:09:08.278 } 00:09:08.278 ] 00:09:08.278 }' 00:09:08.278 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.278 08:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.537 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.537 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:08.537 08:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.537 08:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.537 08:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.537 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:08.537 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:08.537 08:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.537 08:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.537 [2024-10-05 08:45:44.986066] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:08.537 08:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.537 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:08.537 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.537 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.537 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:08.537 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.537 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.537 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.537 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.537 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.537 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.537 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.537 08:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.537 08:45:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.537 08:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.797 08:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.797 08:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.798 "name": "Existed_Raid", 00:09:08.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.798 "strip_size_kb": 64, 00:09:08.798 "state": "configuring", 00:09:08.798 "raid_level": "concat", 00:09:08.798 "superblock": false, 00:09:08.798 "num_base_bdevs": 3, 00:09:08.798 "num_base_bdevs_discovered": 2, 00:09:08.798 "num_base_bdevs_operational": 3, 00:09:08.798 "base_bdevs_list": [ 00:09:08.798 { 00:09:08.798 "name": "BaseBdev1", 00:09:08.798 "uuid": "319aa027-cfd6-40b0-9357-2ac803e01a9e", 00:09:08.798 "is_configured": true, 00:09:08.798 "data_offset": 0, 00:09:08.798 "data_size": 65536 00:09:08.798 }, 00:09:08.798 { 00:09:08.798 "name": null, 00:09:08.798 "uuid": "6e08b52b-51d2-415e-9506-4801bae3ad0e", 00:09:08.798 "is_configured": false, 00:09:08.798 "data_offset": 0, 00:09:08.798 "data_size": 65536 00:09:08.798 }, 00:09:08.798 { 00:09:08.798 "name": "BaseBdev3", 00:09:08.798 "uuid": "313550fa-4511-436c-b926-84cf451b15f2", 00:09:08.798 "is_configured": true, 00:09:08.798 "data_offset": 0, 00:09:08.798 "data_size": 65536 00:09:08.798 } 00:09:08.798 ] 00:09:08.798 }' 00:09:08.798 08:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.798 08:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.058 08:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.058 08:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.058 08:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:09.058 08:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:09.058 08:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.058 08:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:09.058 08:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:09.058 08:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.058 08:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.058 [2024-10-05 08:45:45.477252] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:09.319 08:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.319 08:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:09.319 08:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.319 08:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.319 08:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:09.319 08:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.319 08:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.319 08:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.319 08:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.319 08:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.319 08:45:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.319 08:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.319 08:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.319 08:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.319 08:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.319 08:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.319 08:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.319 "name": "Existed_Raid", 00:09:09.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.319 "strip_size_kb": 64, 00:09:09.319 "state": "configuring", 00:09:09.319 "raid_level": "concat", 00:09:09.319 "superblock": false, 00:09:09.319 "num_base_bdevs": 3, 00:09:09.319 "num_base_bdevs_discovered": 1, 00:09:09.319 "num_base_bdevs_operational": 3, 00:09:09.319 "base_bdevs_list": [ 00:09:09.319 { 00:09:09.319 "name": null, 00:09:09.319 "uuid": "319aa027-cfd6-40b0-9357-2ac803e01a9e", 00:09:09.319 "is_configured": false, 00:09:09.319 "data_offset": 0, 00:09:09.319 "data_size": 65536 00:09:09.319 }, 00:09:09.319 { 00:09:09.319 "name": null, 00:09:09.319 "uuid": "6e08b52b-51d2-415e-9506-4801bae3ad0e", 00:09:09.319 "is_configured": false, 00:09:09.319 "data_offset": 0, 00:09:09.319 "data_size": 65536 00:09:09.319 }, 00:09:09.319 { 00:09:09.319 "name": "BaseBdev3", 00:09:09.319 "uuid": "313550fa-4511-436c-b926-84cf451b15f2", 00:09:09.319 "is_configured": true, 00:09:09.319 "data_offset": 0, 00:09:09.319 "data_size": 65536 00:09:09.319 } 00:09:09.319 ] 00:09:09.319 }' 00:09:09.319 08:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.319 08:45:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.582 08:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.582 08:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:09.582 08:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.582 08:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.582 08:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.582 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:09.582 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:09.582 08:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.582 08:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.582 [2024-10-05 08:45:46.021505] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:09.582 08:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.582 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:09.582 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.582 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.582 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:09.582 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.582 08:45:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.582 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.582 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.582 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.582 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.582 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.582 08:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.582 08:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.582 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.582 08:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.842 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.842 "name": "Existed_Raid", 00:09:09.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.842 "strip_size_kb": 64, 00:09:09.842 "state": "configuring", 00:09:09.842 "raid_level": "concat", 00:09:09.842 "superblock": false, 00:09:09.842 "num_base_bdevs": 3, 00:09:09.842 "num_base_bdevs_discovered": 2, 00:09:09.842 "num_base_bdevs_operational": 3, 00:09:09.842 "base_bdevs_list": [ 00:09:09.842 { 00:09:09.842 "name": null, 00:09:09.842 "uuid": "319aa027-cfd6-40b0-9357-2ac803e01a9e", 00:09:09.842 "is_configured": false, 00:09:09.842 "data_offset": 0, 00:09:09.842 "data_size": 65536 00:09:09.842 }, 00:09:09.842 { 00:09:09.842 "name": "BaseBdev2", 00:09:09.842 "uuid": "6e08b52b-51d2-415e-9506-4801bae3ad0e", 00:09:09.842 "is_configured": true, 00:09:09.842 "data_offset": 
0, 00:09:09.842 "data_size": 65536 00:09:09.842 }, 00:09:09.842 { 00:09:09.842 "name": "BaseBdev3", 00:09:09.842 "uuid": "313550fa-4511-436c-b926-84cf451b15f2", 00:09:09.842 "is_configured": true, 00:09:09.842 "data_offset": 0, 00:09:09.842 "data_size": 65536 00:09:09.842 } 00:09:09.842 ] 00:09:09.842 }' 00:09:09.842 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.842 08:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.102 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.102 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:10.102 08:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.102 08:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.102 08:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.102 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:10.102 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.102 08:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.102 08:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.102 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:10.102 08:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.102 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 319aa027-cfd6-40b0-9357-2ac803e01a9e 00:09:10.102 08:45:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.102 08:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.362 [2024-10-05 08:45:46.605916] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:10.362 [2024-10-05 08:45:46.605989] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:10.362 [2024-10-05 08:45:46.606004] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:10.362 [2024-10-05 08:45:46.606278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:10.362 [2024-10-05 08:45:46.606427] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:10.362 [2024-10-05 08:45:46.606435] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:10.362 [2024-10-05 08:45:46.606688] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:10.362 NewBaseBdev 00:09:10.362 08:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.362 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:10.362 08:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:10.362 08:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:10.362 08:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:10.362 08:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:10.362 08:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:10.362 08:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:10.362 
08:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.362 08:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.362 08:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.362 08:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:10.362 08:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.362 08:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.362 [ 00:09:10.362 { 00:09:10.362 "name": "NewBaseBdev", 00:09:10.362 "aliases": [ 00:09:10.362 "319aa027-cfd6-40b0-9357-2ac803e01a9e" 00:09:10.362 ], 00:09:10.362 "product_name": "Malloc disk", 00:09:10.362 "block_size": 512, 00:09:10.362 "num_blocks": 65536, 00:09:10.362 "uuid": "319aa027-cfd6-40b0-9357-2ac803e01a9e", 00:09:10.362 "assigned_rate_limits": { 00:09:10.362 "rw_ios_per_sec": 0, 00:09:10.362 "rw_mbytes_per_sec": 0, 00:09:10.362 "r_mbytes_per_sec": 0, 00:09:10.362 "w_mbytes_per_sec": 0 00:09:10.362 }, 00:09:10.362 "claimed": true, 00:09:10.362 "claim_type": "exclusive_write", 00:09:10.362 "zoned": false, 00:09:10.362 "supported_io_types": { 00:09:10.362 "read": true, 00:09:10.362 "write": true, 00:09:10.362 "unmap": true, 00:09:10.362 "flush": true, 00:09:10.362 "reset": true, 00:09:10.362 "nvme_admin": false, 00:09:10.362 "nvme_io": false, 00:09:10.362 "nvme_io_md": false, 00:09:10.362 "write_zeroes": true, 00:09:10.362 "zcopy": true, 00:09:10.362 "get_zone_info": false, 00:09:10.362 "zone_management": false, 00:09:10.362 "zone_append": false, 00:09:10.362 "compare": false, 00:09:10.362 "compare_and_write": false, 00:09:10.362 "abort": true, 00:09:10.362 "seek_hole": false, 00:09:10.362 "seek_data": false, 00:09:10.362 "copy": true, 00:09:10.362 "nvme_iov_md": false 00:09:10.362 }, 00:09:10.362 
"memory_domains": [ 00:09:10.362 { 00:09:10.362 "dma_device_id": "system", 00:09:10.362 "dma_device_type": 1 00:09:10.362 }, 00:09:10.362 { 00:09:10.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.362 "dma_device_type": 2 00:09:10.362 } 00:09:10.362 ], 00:09:10.362 "driver_specific": {} 00:09:10.362 } 00:09:10.362 ] 00:09:10.362 08:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.362 08:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:10.362 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:10.362 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.362 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:10.362 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:10.362 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.362 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.362 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.362 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.362 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.362 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.362 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.362 08:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.362 08:45:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:10.362 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.362 08:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.362 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.362 "name": "Existed_Raid", 00:09:10.362 "uuid": "fa8fcca7-dd88-4f91-9022-500540ad3408", 00:09:10.362 "strip_size_kb": 64, 00:09:10.362 "state": "online", 00:09:10.362 "raid_level": "concat", 00:09:10.362 "superblock": false, 00:09:10.362 "num_base_bdevs": 3, 00:09:10.362 "num_base_bdevs_discovered": 3, 00:09:10.362 "num_base_bdevs_operational": 3, 00:09:10.362 "base_bdevs_list": [ 00:09:10.362 { 00:09:10.362 "name": "NewBaseBdev", 00:09:10.362 "uuid": "319aa027-cfd6-40b0-9357-2ac803e01a9e", 00:09:10.362 "is_configured": true, 00:09:10.362 "data_offset": 0, 00:09:10.362 "data_size": 65536 00:09:10.362 }, 00:09:10.362 { 00:09:10.362 "name": "BaseBdev2", 00:09:10.363 "uuid": "6e08b52b-51d2-415e-9506-4801bae3ad0e", 00:09:10.363 "is_configured": true, 00:09:10.363 "data_offset": 0, 00:09:10.363 "data_size": 65536 00:09:10.363 }, 00:09:10.363 { 00:09:10.363 "name": "BaseBdev3", 00:09:10.363 "uuid": "313550fa-4511-436c-b926-84cf451b15f2", 00:09:10.363 "is_configured": true, 00:09:10.363 "data_offset": 0, 00:09:10.363 "data_size": 65536 00:09:10.363 } 00:09:10.363 ] 00:09:10.363 }' 00:09:10.363 08:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.363 08:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.934 08:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:10.934 08:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:10.934 08:45:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:10.934 08:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:10.934 08:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:10.934 08:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:10.934 08:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:10.934 08:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.934 08:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.934 08:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:10.934 [2024-10-05 08:45:47.109456] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:10.934 08:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.934 08:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:10.934 "name": "Existed_Raid", 00:09:10.934 "aliases": [ 00:09:10.934 "fa8fcca7-dd88-4f91-9022-500540ad3408" 00:09:10.934 ], 00:09:10.934 "product_name": "Raid Volume", 00:09:10.934 "block_size": 512, 00:09:10.934 "num_blocks": 196608, 00:09:10.934 "uuid": "fa8fcca7-dd88-4f91-9022-500540ad3408", 00:09:10.934 "assigned_rate_limits": { 00:09:10.934 "rw_ios_per_sec": 0, 00:09:10.934 "rw_mbytes_per_sec": 0, 00:09:10.934 "r_mbytes_per_sec": 0, 00:09:10.934 "w_mbytes_per_sec": 0 00:09:10.934 }, 00:09:10.934 "claimed": false, 00:09:10.934 "zoned": false, 00:09:10.934 "supported_io_types": { 00:09:10.934 "read": true, 00:09:10.934 "write": true, 00:09:10.934 "unmap": true, 00:09:10.934 "flush": true, 00:09:10.934 "reset": true, 00:09:10.934 "nvme_admin": false, 00:09:10.934 "nvme_io": false, 00:09:10.934 "nvme_io_md": false, 00:09:10.934 
"write_zeroes": true, 00:09:10.934 "zcopy": false, 00:09:10.934 "get_zone_info": false, 00:09:10.934 "zone_management": false, 00:09:10.934 "zone_append": false, 00:09:10.934 "compare": false, 00:09:10.934 "compare_and_write": false, 00:09:10.934 "abort": false, 00:09:10.934 "seek_hole": false, 00:09:10.934 "seek_data": false, 00:09:10.934 "copy": false, 00:09:10.934 "nvme_iov_md": false 00:09:10.934 }, 00:09:10.934 "memory_domains": [ 00:09:10.934 { 00:09:10.934 "dma_device_id": "system", 00:09:10.934 "dma_device_type": 1 00:09:10.934 }, 00:09:10.934 { 00:09:10.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.934 "dma_device_type": 2 00:09:10.934 }, 00:09:10.934 { 00:09:10.934 "dma_device_id": "system", 00:09:10.934 "dma_device_type": 1 00:09:10.934 }, 00:09:10.934 { 00:09:10.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.934 "dma_device_type": 2 00:09:10.934 }, 00:09:10.934 { 00:09:10.934 "dma_device_id": "system", 00:09:10.934 "dma_device_type": 1 00:09:10.934 }, 00:09:10.934 { 00:09:10.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.934 "dma_device_type": 2 00:09:10.934 } 00:09:10.934 ], 00:09:10.934 "driver_specific": { 00:09:10.934 "raid": { 00:09:10.934 "uuid": "fa8fcca7-dd88-4f91-9022-500540ad3408", 00:09:10.934 "strip_size_kb": 64, 00:09:10.934 "state": "online", 00:09:10.934 "raid_level": "concat", 00:09:10.934 "superblock": false, 00:09:10.934 "num_base_bdevs": 3, 00:09:10.934 "num_base_bdevs_discovered": 3, 00:09:10.934 "num_base_bdevs_operational": 3, 00:09:10.934 "base_bdevs_list": [ 00:09:10.934 { 00:09:10.934 "name": "NewBaseBdev", 00:09:10.934 "uuid": "319aa027-cfd6-40b0-9357-2ac803e01a9e", 00:09:10.934 "is_configured": true, 00:09:10.934 "data_offset": 0, 00:09:10.934 "data_size": 65536 00:09:10.934 }, 00:09:10.934 { 00:09:10.934 "name": "BaseBdev2", 00:09:10.934 "uuid": "6e08b52b-51d2-415e-9506-4801bae3ad0e", 00:09:10.934 "is_configured": true, 00:09:10.934 "data_offset": 0, 00:09:10.934 "data_size": 65536 00:09:10.934 }, 
00:09:10.934 { 00:09:10.934 "name": "BaseBdev3", 00:09:10.934 "uuid": "313550fa-4511-436c-b926-84cf451b15f2", 00:09:10.934 "is_configured": true, 00:09:10.934 "data_offset": 0, 00:09:10.934 "data_size": 65536 00:09:10.934 } 00:09:10.934 ] 00:09:10.934 } 00:09:10.934 } 00:09:10.934 }' 00:09:10.934 08:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:10.934 08:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:10.934 BaseBdev2 00:09:10.934 BaseBdev3' 00:09:10.934 08:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:10.934 08:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:10.934 08:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:10.934 08:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:10.934 08:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:10.934 08:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.934 08:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.934 08:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.934 08:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:10.934 08:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:10.934 08:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:10.934 08:45:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:10.934 08:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.934 08:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.934 08:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:10.934 08:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.934 08:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:10.934 08:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:10.934 08:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:10.934 08:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:10.934 08:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:10.934 08:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.934 08:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.934 08:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.934 08:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:10.934 08:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:10.934 08:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:10.934 08:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.934 
08:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.934 [2024-10-05 08:45:47.376858] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:10.934 [2024-10-05 08:45:47.376963] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:10.934 [2024-10-05 08:45:47.377074] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:10.935 [2024-10-05 08:45:47.377151] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:10.935 [2024-10-05 08:45:47.377199] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:10.935 08:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.935 08:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 64891 00:09:10.935 08:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 64891 ']' 00:09:10.935 08:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 64891 00:09:10.935 08:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:10.935 08:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:10.935 08:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64891 00:09:11.195 killing process with pid 64891 00:09:11.195 08:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:11.195 08:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:11.195 08:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64891' 00:09:11.195 08:45:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 64891 00:09:11.195 [2024-10-05 08:45:47.426323] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:11.195 08:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 64891 00:09:11.455 [2024-10-05 08:45:47.746783] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:12.836 ************************************ 00:09:12.836 END TEST raid_state_function_test 00:09:12.836 ************************************ 00:09:12.836 08:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:12.836 00:09:12.836 real 0m10.781s 00:09:12.836 user 0m16.761s 00:09:12.836 sys 0m1.961s 00:09:12.836 08:45:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:12.836 08:45:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.836 08:45:49 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:09:12.836 08:45:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:12.836 08:45:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:12.836 08:45:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:12.836 ************************************ 00:09:12.836 START TEST raid_state_function_test_sb 00:09:12.836 ************************************ 00:09:12.836 08:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:09:12.836 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:12.836 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:12.836 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:12.836 08:45:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:12.836 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:12.836 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:12.836 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:12.837 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:12.837 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:12.837 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:12.837 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:12.837 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:12.837 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:12.837 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:12.837 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:12.837 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:12.837 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:12.837 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:12.837 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:12.837 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:12.837 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:12.837 08:45:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:12.837 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:12.837 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:12.837 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:12.837 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:12.837 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=65452 00:09:12.837 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65452' 00:09:12.837 Process raid pid: 65452 00:09:12.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.837 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 65452 00:09:12.837 08:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 65452 ']' 00:09:12.837 08:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.837 08:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:12.837 08:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:12.837 08:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:12.837 08:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.837 08:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:12.837 [2024-10-05 08:45:49.263531] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:09:12.837 [2024-10-05 08:45:49.263721] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:13.096 [2024-10-05 08:45:49.429376] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.356 [2024-10-05 08:45:49.686239] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.616 [2024-10-05 08:45:49.927687] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.616 [2024-10-05 08:45:49.927819] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.875 08:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:13.875 08:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:13.875 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:13.875 08:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.875 08:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.875 [2024-10-05 08:45:50.099774] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 
00:09:13.875 [2024-10-05 08:45:50.099833] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:13.875 [2024-10-05 08:45:50.099847] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:13.875 [2024-10-05 08:45:50.099858] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:13.875 [2024-10-05 08:45:50.099864] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:13.875 [2024-10-05 08:45:50.099873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:13.875 08:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.875 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:13.875 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.875 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.875 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:13.875 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.875 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.875 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.875 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.875 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.875 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.875 08:45:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.875 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.875 08:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.875 08:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.875 08:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.875 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.875 "name": "Existed_Raid", 00:09:13.875 "uuid": "57f46c87-f6a1-43a1-b157-7450128bd79a", 00:09:13.875 "strip_size_kb": 64, 00:09:13.875 "state": "configuring", 00:09:13.875 "raid_level": "concat", 00:09:13.875 "superblock": true, 00:09:13.875 "num_base_bdevs": 3, 00:09:13.875 "num_base_bdevs_discovered": 0, 00:09:13.875 "num_base_bdevs_operational": 3, 00:09:13.875 "base_bdevs_list": [ 00:09:13.875 { 00:09:13.875 "name": "BaseBdev1", 00:09:13.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.875 "is_configured": false, 00:09:13.875 "data_offset": 0, 00:09:13.875 "data_size": 0 00:09:13.875 }, 00:09:13.875 { 00:09:13.875 "name": "BaseBdev2", 00:09:13.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.875 "is_configured": false, 00:09:13.875 "data_offset": 0, 00:09:13.875 "data_size": 0 00:09:13.875 }, 00:09:13.875 { 00:09:13.875 "name": "BaseBdev3", 00:09:13.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.875 "is_configured": false, 00:09:13.875 "data_offset": 0, 00:09:13.875 "data_size": 0 00:09:13.875 } 00:09:13.875 ] 00:09:13.875 }' 00:09:13.875 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.875 08:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.135 08:45:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:14.135 08:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.135 08:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.135 [2024-10-05 08:45:50.550889] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:14.135 [2024-10-05 08:45:50.550929] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:14.135 08:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.135 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:14.135 08:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.135 08:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.135 [2024-10-05 08:45:50.558923] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:14.135 [2024-10-05 08:45:50.559014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:14.135 [2024-10-05 08:45:50.559041] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:14.135 [2024-10-05 08:45:50.559062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:14.135 [2024-10-05 08:45:50.559079] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:14.135 [2024-10-05 08:45:50.559099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:14.135 08:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:14.135 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:14.135 08:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.135 08:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.396 [2024-10-05 08:45:50.635367] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:14.396 BaseBdev1 00:09:14.396 08:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.396 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:14.396 08:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:14.396 08:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:14.396 08:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:14.396 08:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:14.396 08:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:14.396 08:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:14.396 08:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.396 08:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.396 08:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.396 08:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:14.396 08:45:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.396 08:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.396 [ 00:09:14.396 { 00:09:14.396 "name": "BaseBdev1", 00:09:14.396 "aliases": [ 00:09:14.396 "19e75873-c41e-4f2d-a7bd-24bd44495b27" 00:09:14.396 ], 00:09:14.396 "product_name": "Malloc disk", 00:09:14.396 "block_size": 512, 00:09:14.396 "num_blocks": 65536, 00:09:14.396 "uuid": "19e75873-c41e-4f2d-a7bd-24bd44495b27", 00:09:14.396 "assigned_rate_limits": { 00:09:14.396 "rw_ios_per_sec": 0, 00:09:14.396 "rw_mbytes_per_sec": 0, 00:09:14.396 "r_mbytes_per_sec": 0, 00:09:14.396 "w_mbytes_per_sec": 0 00:09:14.396 }, 00:09:14.396 "claimed": true, 00:09:14.396 "claim_type": "exclusive_write", 00:09:14.396 "zoned": false, 00:09:14.396 "supported_io_types": { 00:09:14.396 "read": true, 00:09:14.396 "write": true, 00:09:14.396 "unmap": true, 00:09:14.396 "flush": true, 00:09:14.396 "reset": true, 00:09:14.396 "nvme_admin": false, 00:09:14.396 "nvme_io": false, 00:09:14.396 "nvme_io_md": false, 00:09:14.396 "write_zeroes": true, 00:09:14.396 "zcopy": true, 00:09:14.396 "get_zone_info": false, 00:09:14.396 "zone_management": false, 00:09:14.396 "zone_append": false, 00:09:14.396 "compare": false, 00:09:14.396 "compare_and_write": false, 00:09:14.396 "abort": true, 00:09:14.396 "seek_hole": false, 00:09:14.396 "seek_data": false, 00:09:14.396 "copy": true, 00:09:14.396 "nvme_iov_md": false 00:09:14.396 }, 00:09:14.396 "memory_domains": [ 00:09:14.396 { 00:09:14.396 "dma_device_id": "system", 00:09:14.396 "dma_device_type": 1 00:09:14.396 }, 00:09:14.396 { 00:09:14.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.396 "dma_device_type": 2 00:09:14.396 } 00:09:14.396 ], 00:09:14.396 "driver_specific": {} 00:09:14.396 } 00:09:14.396 ] 00:09:14.396 08:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.396 08:45:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:09:14.396 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:14.396 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.396 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.396 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.396 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.396 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.396 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.396 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.396 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.396 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.396 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.396 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.396 08:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.396 08:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.396 08:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.396 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.396 "name": "Existed_Raid", 00:09:14.396 "uuid": 
"a29addea-4efa-4573-bbe6-6c0460b03b18", 00:09:14.396 "strip_size_kb": 64, 00:09:14.396 "state": "configuring", 00:09:14.396 "raid_level": "concat", 00:09:14.396 "superblock": true, 00:09:14.396 "num_base_bdevs": 3, 00:09:14.396 "num_base_bdevs_discovered": 1, 00:09:14.396 "num_base_bdevs_operational": 3, 00:09:14.396 "base_bdevs_list": [ 00:09:14.396 { 00:09:14.396 "name": "BaseBdev1", 00:09:14.396 "uuid": "19e75873-c41e-4f2d-a7bd-24bd44495b27", 00:09:14.396 "is_configured": true, 00:09:14.396 "data_offset": 2048, 00:09:14.396 "data_size": 63488 00:09:14.396 }, 00:09:14.396 { 00:09:14.396 "name": "BaseBdev2", 00:09:14.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.396 "is_configured": false, 00:09:14.396 "data_offset": 0, 00:09:14.396 "data_size": 0 00:09:14.396 }, 00:09:14.396 { 00:09:14.396 "name": "BaseBdev3", 00:09:14.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.396 "is_configured": false, 00:09:14.396 "data_offset": 0, 00:09:14.396 "data_size": 0 00:09:14.396 } 00:09:14.396 ] 00:09:14.396 }' 00:09:14.396 08:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.396 08:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.656 08:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:14.656 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.656 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.656 [2024-10-05 08:45:51.082608] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:14.656 [2024-10-05 08:45:51.082647] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:14.656 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.656 
08:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:14.656 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.656 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.656 [2024-10-05 08:45:51.090651] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:14.656 [2024-10-05 08:45:51.092596] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:14.656 [2024-10-05 08:45:51.092633] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:14.656 [2024-10-05 08:45:51.092642] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:14.656 [2024-10-05 08:45:51.092650] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:14.656 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.656 08:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:14.657 08:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:14.657 08:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:14.657 08:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.657 08:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.657 08:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.657 08:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:09:14.657 08:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.657 08:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.657 08:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.657 08:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.657 08:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.657 08:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.657 08:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.657 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.657 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.657 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.917 08:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.917 "name": "Existed_Raid", 00:09:14.917 "uuid": "95feb557-d491-4cd5-a66e-a057603a4f6d", 00:09:14.917 "strip_size_kb": 64, 00:09:14.917 "state": "configuring", 00:09:14.917 "raid_level": "concat", 00:09:14.917 "superblock": true, 00:09:14.917 "num_base_bdevs": 3, 00:09:14.917 "num_base_bdevs_discovered": 1, 00:09:14.917 "num_base_bdevs_operational": 3, 00:09:14.917 "base_bdevs_list": [ 00:09:14.917 { 00:09:14.917 "name": "BaseBdev1", 00:09:14.917 "uuid": "19e75873-c41e-4f2d-a7bd-24bd44495b27", 00:09:14.917 "is_configured": true, 00:09:14.917 "data_offset": 2048, 00:09:14.917 "data_size": 63488 00:09:14.917 }, 00:09:14.917 { 00:09:14.917 "name": "BaseBdev2", 00:09:14.917 "uuid": "00000000-0000-0000-0000-000000000000", 
00:09:14.917 "is_configured": false, 00:09:14.917 "data_offset": 0, 00:09:14.917 "data_size": 0 00:09:14.917 }, 00:09:14.917 { 00:09:14.917 "name": "BaseBdev3", 00:09:14.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.917 "is_configured": false, 00:09:14.917 "data_offset": 0, 00:09:14.917 "data_size": 0 00:09:14.917 } 00:09:14.917 ] 00:09:14.917 }' 00:09:14.917 08:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.917 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.177 08:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:15.177 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.177 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.177 [2024-10-05 08:45:51.509762] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:15.177 BaseBdev2 00:09:15.177 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.177 08:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:15.177 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:15.177 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:15.177 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:15.177 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:15.177 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:15.177 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 
00:09:15.177 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.177 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.177 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.177 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:15.177 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.177 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.177 [ 00:09:15.177 { 00:09:15.177 "name": "BaseBdev2", 00:09:15.177 "aliases": [ 00:09:15.177 "9d42519d-bfd2-4b88-a960-0abb2d87164d" 00:09:15.177 ], 00:09:15.177 "product_name": "Malloc disk", 00:09:15.177 "block_size": 512, 00:09:15.177 "num_blocks": 65536, 00:09:15.177 "uuid": "9d42519d-bfd2-4b88-a960-0abb2d87164d", 00:09:15.177 "assigned_rate_limits": { 00:09:15.177 "rw_ios_per_sec": 0, 00:09:15.177 "rw_mbytes_per_sec": 0, 00:09:15.177 "r_mbytes_per_sec": 0, 00:09:15.177 "w_mbytes_per_sec": 0 00:09:15.177 }, 00:09:15.177 "claimed": true, 00:09:15.177 "claim_type": "exclusive_write", 00:09:15.177 "zoned": false, 00:09:15.177 "supported_io_types": { 00:09:15.177 "read": true, 00:09:15.177 "write": true, 00:09:15.177 "unmap": true, 00:09:15.177 "flush": true, 00:09:15.177 "reset": true, 00:09:15.177 "nvme_admin": false, 00:09:15.177 "nvme_io": false, 00:09:15.177 "nvme_io_md": false, 00:09:15.177 "write_zeroes": true, 00:09:15.177 "zcopy": true, 00:09:15.177 "get_zone_info": false, 00:09:15.177 "zone_management": false, 00:09:15.177 "zone_append": false, 00:09:15.177 "compare": false, 00:09:15.177 "compare_and_write": false, 00:09:15.177 "abort": true, 00:09:15.177 "seek_hole": false, 00:09:15.177 "seek_data": false, 00:09:15.177 "copy": true, 00:09:15.177 "nvme_iov_md": false 
00:09:15.177 }, 00:09:15.177 "memory_domains": [ 00:09:15.177 { 00:09:15.177 "dma_device_id": "system", 00:09:15.177 "dma_device_type": 1 00:09:15.177 }, 00:09:15.177 { 00:09:15.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.177 "dma_device_type": 2 00:09:15.177 } 00:09:15.177 ], 00:09:15.177 "driver_specific": {} 00:09:15.177 } 00:09:15.177 ] 00:09:15.177 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.177 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:15.177 08:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:15.177 08:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:15.177 08:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:15.177 08:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.177 08:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.177 08:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:15.177 08:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.177 08:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.177 08:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.177 08:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.177 08:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.177 08:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.177 08:45:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.177 08:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.177 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.177 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.177 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.177 08:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.177 "name": "Existed_Raid", 00:09:15.177 "uuid": "95feb557-d491-4cd5-a66e-a057603a4f6d", 00:09:15.177 "strip_size_kb": 64, 00:09:15.177 "state": "configuring", 00:09:15.177 "raid_level": "concat", 00:09:15.178 "superblock": true, 00:09:15.178 "num_base_bdevs": 3, 00:09:15.178 "num_base_bdevs_discovered": 2, 00:09:15.178 "num_base_bdevs_operational": 3, 00:09:15.178 "base_bdevs_list": [ 00:09:15.178 { 00:09:15.178 "name": "BaseBdev1", 00:09:15.178 "uuid": "19e75873-c41e-4f2d-a7bd-24bd44495b27", 00:09:15.178 "is_configured": true, 00:09:15.178 "data_offset": 2048, 00:09:15.178 "data_size": 63488 00:09:15.178 }, 00:09:15.178 { 00:09:15.178 "name": "BaseBdev2", 00:09:15.178 "uuid": "9d42519d-bfd2-4b88-a960-0abb2d87164d", 00:09:15.178 "is_configured": true, 00:09:15.178 "data_offset": 2048, 00:09:15.178 "data_size": 63488 00:09:15.178 }, 00:09:15.178 { 00:09:15.178 "name": "BaseBdev3", 00:09:15.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.178 "is_configured": false, 00:09:15.178 "data_offset": 0, 00:09:15.178 "data_size": 0 00:09:15.178 } 00:09:15.178 ] 00:09:15.178 }' 00:09:15.178 08:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.178 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.747 
08:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:15.747 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.747 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.747 [2024-10-05 08:45:51.986610] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:15.747 [2024-10-05 08:45:51.986992] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:15.747 [2024-10-05 08:45:51.987059] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:15.747 [2024-10-05 08:45:51.987355] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:15.747 BaseBdev3 00:09:15.747 [2024-10-05 08:45:51.987553] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:15.747 [2024-10-05 08:45:51.987600] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:15.747 [2024-10-05 08:45:51.987786] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:15.747 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.747 08:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:15.747 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:15.747 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:15.747 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:15.747 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:15.747 08:45:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:15.747 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:15.747 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.747 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.747 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.747 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:15.747 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.747 08:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.747 [ 00:09:15.747 { 00:09:15.747 "name": "BaseBdev3", 00:09:15.747 "aliases": [ 00:09:15.747 "db71324e-dab1-4e3e-ac41-8ccab0e387d3" 00:09:15.747 ], 00:09:15.747 "product_name": "Malloc disk", 00:09:15.747 "block_size": 512, 00:09:15.747 "num_blocks": 65536, 00:09:15.747 "uuid": "db71324e-dab1-4e3e-ac41-8ccab0e387d3", 00:09:15.747 "assigned_rate_limits": { 00:09:15.747 "rw_ios_per_sec": 0, 00:09:15.747 "rw_mbytes_per_sec": 0, 00:09:15.747 "r_mbytes_per_sec": 0, 00:09:15.747 "w_mbytes_per_sec": 0 00:09:15.747 }, 00:09:15.747 "claimed": true, 00:09:15.747 "claim_type": "exclusive_write", 00:09:15.747 "zoned": false, 00:09:15.747 "supported_io_types": { 00:09:15.747 "read": true, 00:09:15.747 "write": true, 00:09:15.747 "unmap": true, 00:09:15.747 "flush": true, 00:09:15.747 "reset": true, 00:09:15.747 "nvme_admin": false, 00:09:15.747 "nvme_io": false, 00:09:15.747 "nvme_io_md": false, 00:09:15.747 "write_zeroes": true, 00:09:15.747 "zcopy": true, 00:09:15.747 "get_zone_info": false, 00:09:15.747 "zone_management": false, 00:09:15.747 "zone_append": false, 
00:09:15.747 "compare": false, 00:09:15.747 "compare_and_write": false, 00:09:15.747 "abort": true, 00:09:15.747 "seek_hole": false, 00:09:15.747 "seek_data": false, 00:09:15.747 "copy": true, 00:09:15.747 "nvme_iov_md": false 00:09:15.747 }, 00:09:15.747 "memory_domains": [ 00:09:15.747 { 00:09:15.747 "dma_device_id": "system", 00:09:15.747 "dma_device_type": 1 00:09:15.747 }, 00:09:15.747 { 00:09:15.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.747 "dma_device_type": 2 00:09:15.747 } 00:09:15.747 ], 00:09:15.747 "driver_specific": {} 00:09:15.747 } 00:09:15.747 ] 00:09:15.747 08:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.747 08:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:15.747 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:15.747 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:15.747 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:15.747 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.747 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:15.747 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:15.747 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.747 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.747 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.747 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.747 08:45:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.747 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.747 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.747 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.748 08:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.748 08:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.748 08:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.748 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.748 "name": "Existed_Raid", 00:09:15.748 "uuid": "95feb557-d491-4cd5-a66e-a057603a4f6d", 00:09:15.748 "strip_size_kb": 64, 00:09:15.748 "state": "online", 00:09:15.748 "raid_level": "concat", 00:09:15.748 "superblock": true, 00:09:15.748 "num_base_bdevs": 3, 00:09:15.748 "num_base_bdevs_discovered": 3, 00:09:15.748 "num_base_bdevs_operational": 3, 00:09:15.748 "base_bdevs_list": [ 00:09:15.748 { 00:09:15.748 "name": "BaseBdev1", 00:09:15.748 "uuid": "19e75873-c41e-4f2d-a7bd-24bd44495b27", 00:09:15.748 "is_configured": true, 00:09:15.748 "data_offset": 2048, 00:09:15.748 "data_size": 63488 00:09:15.748 }, 00:09:15.748 { 00:09:15.748 "name": "BaseBdev2", 00:09:15.748 "uuid": "9d42519d-bfd2-4b88-a960-0abb2d87164d", 00:09:15.748 "is_configured": true, 00:09:15.748 "data_offset": 2048, 00:09:15.748 "data_size": 63488 00:09:15.748 }, 00:09:15.748 { 00:09:15.748 "name": "BaseBdev3", 00:09:15.748 "uuid": "db71324e-dab1-4e3e-ac41-8ccab0e387d3", 00:09:15.748 "is_configured": true, 00:09:15.748 "data_offset": 2048, 00:09:15.748 "data_size": 63488 00:09:15.748 } 00:09:15.748 ] 00:09:15.748 
}' 00:09:15.748 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.748 08:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.008 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:16.008 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:16.009 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:16.009 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:16.009 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:16.009 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:16.009 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:16.009 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:16.009 08:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.009 08:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.009 [2024-10-05 08:45:52.458131] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:16.268 08:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.268 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:16.268 "name": "Existed_Raid", 00:09:16.268 "aliases": [ 00:09:16.268 "95feb557-d491-4cd5-a66e-a057603a4f6d" 00:09:16.268 ], 00:09:16.268 "product_name": "Raid Volume", 00:09:16.268 "block_size": 512, 00:09:16.268 "num_blocks": 190464, 00:09:16.269 "uuid": 
"95feb557-d491-4cd5-a66e-a057603a4f6d", 00:09:16.269 "assigned_rate_limits": { 00:09:16.269 "rw_ios_per_sec": 0, 00:09:16.269 "rw_mbytes_per_sec": 0, 00:09:16.269 "r_mbytes_per_sec": 0, 00:09:16.269 "w_mbytes_per_sec": 0 00:09:16.269 }, 00:09:16.269 "claimed": false, 00:09:16.269 "zoned": false, 00:09:16.269 "supported_io_types": { 00:09:16.269 "read": true, 00:09:16.269 "write": true, 00:09:16.269 "unmap": true, 00:09:16.269 "flush": true, 00:09:16.269 "reset": true, 00:09:16.269 "nvme_admin": false, 00:09:16.269 "nvme_io": false, 00:09:16.269 "nvme_io_md": false, 00:09:16.269 "write_zeroes": true, 00:09:16.269 "zcopy": false, 00:09:16.269 "get_zone_info": false, 00:09:16.269 "zone_management": false, 00:09:16.269 "zone_append": false, 00:09:16.269 "compare": false, 00:09:16.269 "compare_and_write": false, 00:09:16.269 "abort": false, 00:09:16.269 "seek_hole": false, 00:09:16.269 "seek_data": false, 00:09:16.269 "copy": false, 00:09:16.269 "nvme_iov_md": false 00:09:16.269 }, 00:09:16.269 "memory_domains": [ 00:09:16.269 { 00:09:16.269 "dma_device_id": "system", 00:09:16.269 "dma_device_type": 1 00:09:16.269 }, 00:09:16.269 { 00:09:16.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.269 "dma_device_type": 2 00:09:16.269 }, 00:09:16.269 { 00:09:16.269 "dma_device_id": "system", 00:09:16.269 "dma_device_type": 1 00:09:16.269 }, 00:09:16.269 { 00:09:16.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.269 "dma_device_type": 2 00:09:16.269 }, 00:09:16.269 { 00:09:16.269 "dma_device_id": "system", 00:09:16.269 "dma_device_type": 1 00:09:16.269 }, 00:09:16.269 { 00:09:16.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.269 "dma_device_type": 2 00:09:16.269 } 00:09:16.269 ], 00:09:16.269 "driver_specific": { 00:09:16.269 "raid": { 00:09:16.269 "uuid": "95feb557-d491-4cd5-a66e-a057603a4f6d", 00:09:16.269 "strip_size_kb": 64, 00:09:16.269 "state": "online", 00:09:16.269 "raid_level": "concat", 00:09:16.269 "superblock": true, 00:09:16.269 "num_base_bdevs": 
3, 00:09:16.269 "num_base_bdevs_discovered": 3, 00:09:16.269 "num_base_bdevs_operational": 3, 00:09:16.269 "base_bdevs_list": [ 00:09:16.269 { 00:09:16.269 "name": "BaseBdev1", 00:09:16.269 "uuid": "19e75873-c41e-4f2d-a7bd-24bd44495b27", 00:09:16.269 "is_configured": true, 00:09:16.269 "data_offset": 2048, 00:09:16.269 "data_size": 63488 00:09:16.269 }, 00:09:16.269 { 00:09:16.269 "name": "BaseBdev2", 00:09:16.269 "uuid": "9d42519d-bfd2-4b88-a960-0abb2d87164d", 00:09:16.269 "is_configured": true, 00:09:16.269 "data_offset": 2048, 00:09:16.269 "data_size": 63488 00:09:16.269 }, 00:09:16.269 { 00:09:16.269 "name": "BaseBdev3", 00:09:16.269 "uuid": "db71324e-dab1-4e3e-ac41-8ccab0e387d3", 00:09:16.269 "is_configured": true, 00:09:16.269 "data_offset": 2048, 00:09:16.269 "data_size": 63488 00:09:16.269 } 00:09:16.269 ] 00:09:16.269 } 00:09:16.269 } 00:09:16.269 }' 00:09:16.269 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:16.269 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:16.269 BaseBdev2 00:09:16.269 BaseBdev3' 00:09:16.269 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.269 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:16.269 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.269 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:16.269 08:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.269 08:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.269 08:45:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.269 08:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.269 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.269 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.269 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.269 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:16.269 08:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.269 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.269 08:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.269 08:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.269 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.269 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.269 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.269 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:16.269 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.269 08:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.269 08:45:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.269 08:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.269 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.269 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.269 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:16.269 08:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.269 08:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.269 [2024-10-05 08:45:52.737367] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:16.269 [2024-10-05 08:45:52.737439] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:16.269 [2024-10-05 08:45:52.737511] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:16.529 08:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.529 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:16.529 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:16.529 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:16.529 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:16.529 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:16.529 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:16.529 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:09:16.529 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:16.529 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:16.529 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.529 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:16.529 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.529 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.529 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.529 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.529 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.529 08:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.529 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.529 08:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.529 08:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.529 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.529 "name": "Existed_Raid", 00:09:16.529 "uuid": "95feb557-d491-4cd5-a66e-a057603a4f6d", 00:09:16.529 "strip_size_kb": 64, 00:09:16.529 "state": "offline", 00:09:16.529 "raid_level": "concat", 00:09:16.529 "superblock": true, 00:09:16.529 "num_base_bdevs": 3, 00:09:16.529 "num_base_bdevs_discovered": 2, 00:09:16.529 "num_base_bdevs_operational": 2, 
00:09:16.529 "base_bdevs_list": [ 00:09:16.529 { 00:09:16.529 "name": null, 00:09:16.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.529 "is_configured": false, 00:09:16.529 "data_offset": 0, 00:09:16.529 "data_size": 63488 00:09:16.529 }, 00:09:16.529 { 00:09:16.529 "name": "BaseBdev2", 00:09:16.529 "uuid": "9d42519d-bfd2-4b88-a960-0abb2d87164d", 00:09:16.529 "is_configured": true, 00:09:16.529 "data_offset": 2048, 00:09:16.529 "data_size": 63488 00:09:16.529 }, 00:09:16.529 { 00:09:16.529 "name": "BaseBdev3", 00:09:16.529 "uuid": "db71324e-dab1-4e3e-ac41-8ccab0e387d3", 00:09:16.529 "is_configured": true, 00:09:16.529 "data_offset": 2048, 00:09:16.529 "data_size": 63488 00:09:16.529 } 00:09:16.529 ] 00:09:16.529 }' 00:09:16.529 08:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.529 08:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.790 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:16.790 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:17.050 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.050 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.050 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:17.050 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.050 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.050 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:17.050 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:17.050 08:45:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:17.050 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.050 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.050 [2024-10-05 08:45:53.311171] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:17.050 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.050 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:17.050 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:17.050 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.050 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.050 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.050 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:17.050 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.050 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:17.050 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:17.050 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:17.050 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.050 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.050 [2024-10-05 08:45:53.468335] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev3 00:09:17.050 [2024-10-05 08:45:53.468464] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:17.310 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.310 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.311 BaseBdev2 00:09:17.311 08:45:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.311 [ 00:09:17.311 { 00:09:17.311 "name": "BaseBdev2", 00:09:17.311 "aliases": [ 00:09:17.311 "df17ea69-6610-4b15-b68f-59df05ff4bba" 00:09:17.311 ], 00:09:17.311 "product_name": "Malloc disk", 00:09:17.311 "block_size": 512, 00:09:17.311 "num_blocks": 65536, 00:09:17.311 "uuid": "df17ea69-6610-4b15-b68f-59df05ff4bba", 00:09:17.311 "assigned_rate_limits": { 00:09:17.311 "rw_ios_per_sec": 0, 
00:09:17.311 "rw_mbytes_per_sec": 0,
00:09:17.311 "r_mbytes_per_sec": 0,
00:09:17.311 "w_mbytes_per_sec": 0
00:09:17.311 },
00:09:17.311 "claimed": false,
00:09:17.311 "zoned": false,
00:09:17.311 "supported_io_types": {
00:09:17.311 "read": true,
00:09:17.311 "write": true,
00:09:17.311 "unmap": true,
00:09:17.311 "flush": true,
00:09:17.311 "reset": true,
00:09:17.311 "nvme_admin": false,
00:09:17.311 "nvme_io": false,
00:09:17.311 "nvme_io_md": false,
00:09:17.311 "write_zeroes": true,
00:09:17.311 "zcopy": true,
00:09:17.311 "get_zone_info": false,
00:09:17.311 "zone_management": false,
00:09:17.311 "zone_append": false,
00:09:17.311 "compare": false,
00:09:17.311 "compare_and_write": false,
00:09:17.311 "abort": true,
00:09:17.311 "seek_hole": false,
00:09:17.311 "seek_data": false,
00:09:17.311 "copy": true,
00:09:17.311 "nvme_iov_md": false
00:09:17.311 },
00:09:17.311 "memory_domains": [
00:09:17.311 {
00:09:17.311 "dma_device_id": "system",
00:09:17.311 "dma_device_type": 1
00:09:17.311 },
00:09:17.311 {
00:09:17.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:17.311 "dma_device_type": 2
00:09:17.311 }
00:09:17.311 ],
00:09:17.311 "driver_specific": {}
00:09:17.311 }
00:09:17.311 ]
00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:17.311 BaseBdev3
00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:17.311 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:17.311 [
00:09:17.311 {
00:09:17.311 "name": "BaseBdev3",
00:09:17.311 "aliases": [
00:09:17.311 "6bec4a91-5173-4da7-896d-a5fe870b4e78"
00:09:17.311 ],
00:09:17.311 "product_name": "Malloc disk",
00:09:17.311 "block_size": 512,
00:09:17.311 "num_blocks": 65536,
00:09:17.311 "uuid": "6bec4a91-5173-4da7-896d-a5fe870b4e78",
00:09:17.311 "assigned_rate_limits": {
00:09:17.311 "rw_ios_per_sec": 0,
00:09:17.311 "rw_mbytes_per_sec": 0,
00:09:17.311 "r_mbytes_per_sec": 0,
00:09:17.311 "w_mbytes_per_sec": 0
00:09:17.311 },
00:09:17.311 "claimed": false,
00:09:17.311 "zoned": false,
00:09:17.311 "supported_io_types": {
00:09:17.311 "read": true,
00:09:17.311 "write": true,
00:09:17.311 "unmap": true,
00:09:17.311 "flush": true,
00:09:17.311 "reset": true,
00:09:17.311 "nvme_admin": false,
00:09:17.312 "nvme_io": false,
00:09:17.312 "nvme_io_md": false,
00:09:17.312 "write_zeroes": true,
00:09:17.312 "zcopy": true,
00:09:17.312 "get_zone_info": false,
00:09:17.312 "zone_management": false,
00:09:17.312 "zone_append": false,
00:09:17.312 "compare": false,
00:09:17.312 "compare_and_write": false,
00:09:17.312 "abort": true,
00:09:17.312 "seek_hole": false,
00:09:17.312 "seek_data": false,
00:09:17.312 "copy": true,
00:09:17.312 "nvme_iov_md": false
00:09:17.312 },
00:09:17.312 "memory_domains": [
00:09:17.312 {
00:09:17.312 "dma_device_id": "system",
00:09:17.312 "dma_device_type": 1
00:09:17.312 },
00:09:17.312 {
00:09:17.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:17.312 "dma_device_type": 2
00:09:17.312 }
00:09:17.312 ],
00:09:17.312 "driver_specific": {}
00:09:17.312 }
00:09:17.312 ]
00:09:17.312 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:17.312 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:09:17.312 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:17.312 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:17.312 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:17.312 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:17.312 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:17.312 [2024-10-05 08:45:53.759995] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:17.312 [2024-10-05 08:45:53.760115] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:17.312 [2024-10-05 08:45:53.760154] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:17.312 [2024-10-05 08:45:53.762119] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:17.312 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:17.312 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:17.312 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:17.312 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:17.312 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:17.312 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:17.312 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:17.312 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:17.312 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:17.312 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:17.312 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:17.312 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:17.312 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:17.312 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:17.312 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:17.571 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:17.571 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:17.571 "name": "Existed_Raid",
00:09:17.571 "uuid": "d1c37c27-f050-4ea5-84d4-80346ac9b4b1",
00:09:17.571 "strip_size_kb": 64,
00:09:17.571 "state": "configuring",
00:09:17.571 "raid_level": "concat",
00:09:17.571 "superblock": true,
00:09:17.571 "num_base_bdevs": 3,
00:09:17.571 "num_base_bdevs_discovered": 2,
00:09:17.571 "num_base_bdevs_operational": 3,
00:09:17.571 "base_bdevs_list": [
00:09:17.571 {
00:09:17.571 "name": "BaseBdev1",
00:09:17.571 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:17.571 "is_configured": false,
00:09:17.571 "data_offset": 0,
00:09:17.571 "data_size": 0
00:09:17.571 },
00:09:17.571 {
00:09:17.571 "name": "BaseBdev2",
00:09:17.571 "uuid": "df17ea69-6610-4b15-b68f-59df05ff4bba",
00:09:17.571 "is_configured": true,
00:09:17.571 "data_offset": 2048,
00:09:17.571 "data_size": 63488
00:09:17.571 },
00:09:17.571 {
00:09:17.571 "name": "BaseBdev3",
00:09:17.571 "uuid": "6bec4a91-5173-4da7-896d-a5fe870b4e78",
00:09:17.571 "is_configured": true,
00:09:17.571 "data_offset": 2048,
00:09:17.571 "data_size": 63488
00:09:17.571 }
00:09:17.571 ]
00:09:17.571 }'
00:09:17.571 08:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:17.571 08:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:17.831 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:09:17.831 08:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:17.831 08:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:17.831 [2024-10-05 08:45:54.183199] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:09:17.831 08:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:17.831 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:17.831 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:17.831 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:17.831 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:17.831 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:17.831 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:17.831 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:17.831 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:17.831 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:17.831 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:17.831 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:17.831 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:17.831 08:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:17.831 08:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:17.831 08:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:17.832 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:17.832 "name": "Existed_Raid",
00:09:17.832 "uuid": "d1c37c27-f050-4ea5-84d4-80346ac9b4b1",
00:09:17.832 "strip_size_kb": 64,
00:09:17.832 "state": "configuring",
00:09:17.832 "raid_level": "concat",
00:09:17.832 "superblock": true,
00:09:17.832 "num_base_bdevs": 3,
00:09:17.832 "num_base_bdevs_discovered": 1,
00:09:17.832 "num_base_bdevs_operational": 3,
00:09:17.832 "base_bdevs_list": [
00:09:17.832 {
00:09:17.832 "name": "BaseBdev1",
00:09:17.832 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:17.832 "is_configured": false,
00:09:17.832 "data_offset": 0,
00:09:17.832 "data_size": 0
00:09:17.832 },
00:09:17.832 {
00:09:17.832 "name": null,
00:09:17.832 "uuid": "df17ea69-6610-4b15-b68f-59df05ff4bba",
00:09:17.832 "is_configured": false,
00:09:17.832 "data_offset": 0,
00:09:17.832 "data_size": 63488
00:09:17.832 },
00:09:17.832 {
00:09:17.832 "name": "BaseBdev3",
00:09:17.832 "uuid": "6bec4a91-5173-4da7-896d-a5fe870b4e78",
00:09:17.832 "is_configured": true,
00:09:17.832 "data_offset": 2048,
00:09:17.832 "data_size": 63488
00:09:17.832 }
00:09:17.832 ]
00:09:17.832 }'
00:09:17.832 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:17.832 08:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:18.409 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:18.409 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:09:18.409 08:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:18.409 08:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:18.409 08:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:18.409 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:09:18.409 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:18.409 08:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:18.409 08:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:18.409 [2024-10-05 08:45:54.677038] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:18.409 BaseBdev1
00:09:18.409 08:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:18.409 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:09:18.409 08:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:09:18.409 08:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:18.409 08:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:09:18.409 08:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:18.409 08:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:18.409 08:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:18.409 08:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:18.409 08:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:18.409 08:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:18.409 08:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:18.409 08:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:18.409 08:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:18.409 [
00:09:18.409 {
00:09:18.409 "name": "BaseBdev1",
00:09:18.409 "aliases": [
00:09:18.409 "f8b42ddf-29dd-46ef-8ab4-6d7783d36c7f"
00:09:18.409 ],
00:09:18.409 "product_name": "Malloc disk",
00:09:18.409 "block_size": 512,
00:09:18.409 "num_blocks": 65536,
00:09:18.409 "uuid": "f8b42ddf-29dd-46ef-8ab4-6d7783d36c7f",
00:09:18.409 "assigned_rate_limits": {
00:09:18.409 "rw_ios_per_sec": 0,
00:09:18.409 "rw_mbytes_per_sec": 0,
00:09:18.409 "r_mbytes_per_sec": 0,
00:09:18.409 "w_mbytes_per_sec": 0
00:09:18.409 },
00:09:18.409 "claimed": true,
00:09:18.409 "claim_type": "exclusive_write",
00:09:18.409 "zoned": false,
00:09:18.409 "supported_io_types": {
00:09:18.409 "read": true,
00:09:18.409 "write": true,
00:09:18.409 "unmap": true,
00:09:18.409 "flush": true,
00:09:18.409 "reset": true,
00:09:18.409 "nvme_admin": false,
00:09:18.409 "nvme_io": false,
00:09:18.409 "nvme_io_md": false,
00:09:18.409 "write_zeroes": true,
00:09:18.409 "zcopy": true,
00:09:18.409 "get_zone_info": false,
00:09:18.409 "zone_management": false,
00:09:18.409 "zone_append": false,
00:09:18.409 "compare": false,
00:09:18.409 "compare_and_write": false,
00:09:18.409 "abort": true,
00:09:18.409 "seek_hole": false,
00:09:18.409 "seek_data": false,
00:09:18.409 "copy": true,
00:09:18.409 "nvme_iov_md": false
00:09:18.409 },
00:09:18.409 "memory_domains": [
00:09:18.409 {
00:09:18.409 "dma_device_id": "system",
00:09:18.409 "dma_device_type": 1
00:09:18.409 },
00:09:18.409 {
00:09:18.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:18.409 "dma_device_type": 2
00:09:18.409 }
00:09:18.409 ],
00:09:18.409 "driver_specific": {}
00:09:18.409 }
00:09:18.409 ]
00:09:18.409 08:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:18.409 08:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:09:18.409 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:18.409 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:18.409 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:18.409 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:18.409 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:18.409 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:18.409 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:18.409 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:18.409 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:18.409 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:18.409 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:18.410 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:18.410 08:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:18.410 08:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:18.410 08:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:18.410 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:18.410 "name": "Existed_Raid",
00:09:18.410 "uuid": "d1c37c27-f050-4ea5-84d4-80346ac9b4b1",
00:09:18.410 "strip_size_kb": 64,
00:09:18.410 "state": "configuring",
00:09:18.410 "raid_level": "concat",
00:09:18.410 "superblock": true,
00:09:18.410 "num_base_bdevs": 3,
00:09:18.410 "num_base_bdevs_discovered": 2,
00:09:18.410 "num_base_bdevs_operational": 3,
00:09:18.410 "base_bdevs_list": [
00:09:18.410 {
00:09:18.410 "name": "BaseBdev1",
00:09:18.410 "uuid": "f8b42ddf-29dd-46ef-8ab4-6d7783d36c7f",
00:09:18.410 "is_configured": true,
00:09:18.410 "data_offset": 2048,
00:09:18.410 "data_size": 63488
00:09:18.410 },
00:09:18.410 {
00:09:18.410 "name": null,
00:09:18.410 "uuid": "df17ea69-6610-4b15-b68f-59df05ff4bba",
00:09:18.410 "is_configured": false,
00:09:18.410 "data_offset": 0,
00:09:18.410 "data_size": 63488
00:09:18.410 },
00:09:18.410 {
00:09:18.410 "name": "BaseBdev3",
00:09:18.410 "uuid": "6bec4a91-5173-4da7-896d-a5fe870b4e78",
00:09:18.410 "is_configured": true,
00:09:18.410 "data_offset": 2048,
00:09:18.410 "data_size": 63488
00:09:18.410 }
00:09:18.410 ]
00:09:18.410 }'
00:09:18.410 08:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:18.410 08:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:18.978 08:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:18.978 08:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:18.978 08:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:18.978 08:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:09:18.978 08:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:18.978 08:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:09:18.978 08:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:09:18.978 08:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:18.978 08:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:18.978 [2024-10-05 08:45:55.216185] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:09:18.978 08:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:18.978 08:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:18.978 08:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:18.978 08:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:18.978 08:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:18.978 08:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:18.978 08:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:18.978 08:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:18.978 08:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:18.978 08:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:18.978 08:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:18.978 08:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:18.978 08:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:18.978 08:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:18.978 08:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:18.978 08:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:18.978 08:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:18.978 "name": "Existed_Raid",
00:09:18.978 "uuid": "d1c37c27-f050-4ea5-84d4-80346ac9b4b1",
00:09:18.978 "strip_size_kb": 64,
00:09:18.978 "state": "configuring",
00:09:18.978 "raid_level": "concat",
00:09:18.978 "superblock": true,
00:09:18.978 "num_base_bdevs": 3,
00:09:18.978 "num_base_bdevs_discovered": 1,
00:09:18.978 "num_base_bdevs_operational": 3,
00:09:18.978 "base_bdevs_list": [
00:09:18.978 {
00:09:18.978 "name": "BaseBdev1",
00:09:18.978 "uuid": "f8b42ddf-29dd-46ef-8ab4-6d7783d36c7f",
00:09:18.978 "is_configured": true,
00:09:18.978 "data_offset": 2048,
00:09:18.978 "data_size": 63488
00:09:18.978 },
00:09:18.978 {
00:09:18.978 "name": null,
00:09:18.978 "uuid": "df17ea69-6610-4b15-b68f-59df05ff4bba",
00:09:18.978 "is_configured": false,
00:09:18.978 "data_offset": 0,
00:09:18.978 "data_size": 63488
00:09:18.978 },
00:09:18.978 {
00:09:18.978 "name": null,
00:09:18.978 "uuid": "6bec4a91-5173-4da7-896d-a5fe870b4e78",
00:09:18.978 "is_configured": false,
00:09:18.978 "data_offset": 0,
00:09:18.978 "data_size": 63488
00:09:18.978 }
00:09:18.978 ]
00:09:18.978 }'
00:09:18.978 08:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:18.978 08:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:19.238 08:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:19.238 08:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:19.238 08:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:19.238 08:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:09:19.238 08:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:19.238 08:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:09:19.238 08:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:09:19.238 08:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:19.238 08:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:19.238 [2024-10-05 08:45:55.691380] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:19.238 08:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:19.238 08:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:19.238 08:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:19.238 08:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:19.238 08:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:19.238 08:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:19.238 08:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:19.238 08:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:19.238 08:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:19.238 08:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:19.238 08:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:19.238 08:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:19.238 08:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:19.238 08:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:19.238 08:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:19.498 08:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:19.498 08:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:19.498 "name": "Existed_Raid",
00:09:19.498 "uuid": "d1c37c27-f050-4ea5-84d4-80346ac9b4b1",
00:09:19.498 "strip_size_kb": 64,
00:09:19.498 "state": "configuring",
00:09:19.498 "raid_level": "concat",
00:09:19.498 "superblock": true,
00:09:19.498 "num_base_bdevs": 3,
00:09:19.498 "num_base_bdevs_discovered": 2,
00:09:19.498 "num_base_bdevs_operational": 3,
00:09:19.498 "base_bdevs_list": [
00:09:19.498 {
00:09:19.498 "name": "BaseBdev1",
00:09:19.498 "uuid": "f8b42ddf-29dd-46ef-8ab4-6d7783d36c7f",
00:09:19.498 "is_configured": true,
00:09:19.498 "data_offset": 2048,
00:09:19.498 "data_size": 63488
00:09:19.498 },
00:09:19.498 {
00:09:19.498 "name": null,
00:09:19.498 "uuid": "df17ea69-6610-4b15-b68f-59df05ff4bba",
00:09:19.498 "is_configured": false,
00:09:19.498 "data_offset": 0,
00:09:19.498 "data_size": 63488
00:09:19.498 },
00:09:19.498 {
00:09:19.498 "name": "BaseBdev3",
00:09:19.498 "uuid": "6bec4a91-5173-4da7-896d-a5fe870b4e78",
00:09:19.498 "is_configured": true,
00:09:19.498 "data_offset": 2048,
00:09:19.498 "data_size": 63488
00:09:19.498 }
00:09:19.498 ]
00:09:19.498 }'
00:09:19.498 08:45:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:19.498 08:45:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:19.759 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:09:19.759 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:19.759 08:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:19.759 08:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:19.759 08:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:19.759 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:09:19.759 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:09:19.759 08:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:19.759 08:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:19.759 [2024-10-05 08:45:56.174598] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:20.019 08:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:20.019 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:20.019 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:20.019 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:20.019 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:20.019 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:20.019 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:20.019 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:20.019 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:20.019 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:20.019 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:20.019 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:20.019 08:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:20.019 08:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:20.019 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:20.019 08:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:20.019 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:20.019 "name": "Existed_Raid",
00:09:20.019 "uuid": "d1c37c27-f050-4ea5-84d4-80346ac9b4b1",
00:09:20.019 "strip_size_kb": 64,
00:09:20.019 "state": "configuring",
00:09:20.019 "raid_level": "concat",
00:09:20.019 "superblock": true,
00:09:20.019 "num_base_bdevs": 3,
00:09:20.019 "num_base_bdevs_discovered": 1,
00:09:20.019 "num_base_bdevs_operational": 3,
00:09:20.019 "base_bdevs_list": [
00:09:20.019 {
00:09:20.019 "name": null,
00:09:20.019 "uuid": "f8b42ddf-29dd-46ef-8ab4-6d7783d36c7f",
00:09:20.019 "is_configured": false,
00:09:20.019 "data_offset": 0,
00:09:20.019 "data_size": 63488
00:09:20.019 },
00:09:20.019 {
00:09:20.019 "name": null,
00:09:20.019 "uuid": "df17ea69-6610-4b15-b68f-59df05ff4bba",
00:09:20.019 "is_configured": false,
00:09:20.019 "data_offset": 0,
00:09:20.019 "data_size": 63488
00:09:20.019 },
00:09:20.019 {
00:09:20.019 "name": "BaseBdev3",
00:09:20.019 "uuid": "6bec4a91-5173-4da7-896d-a5fe870b4e78",
00:09:20.019 "is_configured": true,
00:09:20.019 "data_offset": 2048,
00:09:20.019 "data_size": 63488
00:09:20.019 }
00:09:20.019 ]
00:09:20.019 }'
00:09:20.019 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:20.019 08:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:20.279 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:20.279 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:09:20.279 08:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:20.279 08:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:20.279 08:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:20.539 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:09:20.539 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:09:20.539 08:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:20.539 08:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:20.539 [2024-10-05 08:45:56.762056] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:20.539 08:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:20.539 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:20.539 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:20.539 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:20.539 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:20.539 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:20.539 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:20.539 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:20.539 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:20.539 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:20.539 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:20.539 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:20.539 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:20.539 08:45:56
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.539 08:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.539 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.539 "name": "Existed_Raid", 00:09:20.539 "uuid": "d1c37c27-f050-4ea5-84d4-80346ac9b4b1", 00:09:20.539 "strip_size_kb": 64, 00:09:20.539 "state": "configuring", 00:09:20.539 "raid_level": "concat", 00:09:20.539 "superblock": true, 00:09:20.539 "num_base_bdevs": 3, 00:09:20.539 "num_base_bdevs_discovered": 2, 00:09:20.539 "num_base_bdevs_operational": 3, 00:09:20.539 "base_bdevs_list": [ 00:09:20.539 { 00:09:20.539 "name": null, 00:09:20.539 "uuid": "f8b42ddf-29dd-46ef-8ab4-6d7783d36c7f", 00:09:20.539 "is_configured": false, 00:09:20.539 "data_offset": 0, 00:09:20.539 "data_size": 63488 00:09:20.539 }, 00:09:20.539 { 00:09:20.539 "name": "BaseBdev2", 00:09:20.539 "uuid": "df17ea69-6610-4b15-b68f-59df05ff4bba", 00:09:20.539 "is_configured": true, 00:09:20.539 "data_offset": 2048, 00:09:20.539 "data_size": 63488 00:09:20.539 }, 00:09:20.539 { 00:09:20.539 "name": "BaseBdev3", 00:09:20.539 "uuid": "6bec4a91-5173-4da7-896d-a5fe870b4e78", 00:09:20.539 "is_configured": true, 00:09:20.539 "data_offset": 2048, 00:09:20.539 "data_size": 63488 00:09:20.539 } 00:09:20.539 ] 00:09:20.539 }' 00:09:20.539 08:45:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.539 08:45:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.799 08:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.799 08:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.799 08:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.799 08:45:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:20.799 08:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.799 08:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:20.799 08:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:20.799 08:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.799 08:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.799 08:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.059 08:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.059 08:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f8b42ddf-29dd-46ef-8ab4-6d7783d36c7f 00:09:21.059 08:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.059 08:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.059 [2024-10-05 08:45:57.346372] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:21.059 [2024-10-05 08:45:57.346680] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:21.059 [2024-10-05 08:45:57.346736] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:21.059 [2024-10-05 08:45:57.347038] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:21.059 [2024-10-05 08:45:57.347228] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:21.059 [2024-10-05 08:45:57.347264] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:21.059 NewBaseBdev 00:09:21.059 [2024-10-05 08:45:57.347436] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:21.059 08:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.059 08:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:21.059 08:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:21.059 08:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:21.059 08:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:21.059 08:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:21.059 08:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:21.060 08:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:21.060 08:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.060 08:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.060 08:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.060 08:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:21.060 08:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.060 08:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.060 [ 00:09:21.060 { 00:09:21.060 "name": "NewBaseBdev", 00:09:21.060 "aliases": [ 00:09:21.060 
"f8b42ddf-29dd-46ef-8ab4-6d7783d36c7f" 00:09:21.060 ], 00:09:21.060 "product_name": "Malloc disk", 00:09:21.060 "block_size": 512, 00:09:21.060 "num_blocks": 65536, 00:09:21.060 "uuid": "f8b42ddf-29dd-46ef-8ab4-6d7783d36c7f", 00:09:21.060 "assigned_rate_limits": { 00:09:21.060 "rw_ios_per_sec": 0, 00:09:21.060 "rw_mbytes_per_sec": 0, 00:09:21.060 "r_mbytes_per_sec": 0, 00:09:21.060 "w_mbytes_per_sec": 0 00:09:21.060 }, 00:09:21.060 "claimed": true, 00:09:21.060 "claim_type": "exclusive_write", 00:09:21.060 "zoned": false, 00:09:21.060 "supported_io_types": { 00:09:21.060 "read": true, 00:09:21.060 "write": true, 00:09:21.060 "unmap": true, 00:09:21.060 "flush": true, 00:09:21.060 "reset": true, 00:09:21.060 "nvme_admin": false, 00:09:21.060 "nvme_io": false, 00:09:21.060 "nvme_io_md": false, 00:09:21.060 "write_zeroes": true, 00:09:21.060 "zcopy": true, 00:09:21.060 "get_zone_info": false, 00:09:21.060 "zone_management": false, 00:09:21.060 "zone_append": false, 00:09:21.060 "compare": false, 00:09:21.060 "compare_and_write": false, 00:09:21.060 "abort": true, 00:09:21.060 "seek_hole": false, 00:09:21.060 "seek_data": false, 00:09:21.060 "copy": true, 00:09:21.060 "nvme_iov_md": false 00:09:21.060 }, 00:09:21.060 "memory_domains": [ 00:09:21.060 { 00:09:21.060 "dma_device_id": "system", 00:09:21.060 "dma_device_type": 1 00:09:21.060 }, 00:09:21.060 { 00:09:21.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.060 "dma_device_type": 2 00:09:21.060 } 00:09:21.060 ], 00:09:21.060 "driver_specific": {} 00:09:21.060 } 00:09:21.060 ] 00:09:21.060 08:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.060 08:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:21.060 08:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:21.060 08:45:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.060 08:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:21.060 08:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:21.060 08:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.060 08:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.060 08:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.060 08:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.060 08:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.060 08:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.060 08:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.060 08:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.060 08:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.060 08:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.060 08:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.060 08:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.060 "name": "Existed_Raid", 00:09:21.060 "uuid": "d1c37c27-f050-4ea5-84d4-80346ac9b4b1", 00:09:21.060 "strip_size_kb": 64, 00:09:21.060 "state": "online", 00:09:21.060 "raid_level": "concat", 00:09:21.060 "superblock": true, 00:09:21.060 "num_base_bdevs": 3, 00:09:21.060 "num_base_bdevs_discovered": 3, 00:09:21.060 
"num_base_bdevs_operational": 3, 00:09:21.060 "base_bdevs_list": [ 00:09:21.060 { 00:09:21.060 "name": "NewBaseBdev", 00:09:21.060 "uuid": "f8b42ddf-29dd-46ef-8ab4-6d7783d36c7f", 00:09:21.060 "is_configured": true, 00:09:21.060 "data_offset": 2048, 00:09:21.060 "data_size": 63488 00:09:21.060 }, 00:09:21.060 { 00:09:21.060 "name": "BaseBdev2", 00:09:21.060 "uuid": "df17ea69-6610-4b15-b68f-59df05ff4bba", 00:09:21.060 "is_configured": true, 00:09:21.060 "data_offset": 2048, 00:09:21.060 "data_size": 63488 00:09:21.060 }, 00:09:21.060 { 00:09:21.060 "name": "BaseBdev3", 00:09:21.060 "uuid": "6bec4a91-5173-4da7-896d-a5fe870b4e78", 00:09:21.060 "is_configured": true, 00:09:21.060 "data_offset": 2048, 00:09:21.060 "data_size": 63488 00:09:21.060 } 00:09:21.060 ] 00:09:21.060 }' 00:09:21.060 08:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.060 08:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.630 08:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:21.631 08:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:21.631 08:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:21.631 08:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:21.631 08:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:21.631 08:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:21.631 08:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:21.631 08:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:21.631 08:45:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.631 08:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.631 [2024-10-05 08:45:57.829878] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:21.631 08:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.631 08:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:21.631 "name": "Existed_Raid", 00:09:21.631 "aliases": [ 00:09:21.631 "d1c37c27-f050-4ea5-84d4-80346ac9b4b1" 00:09:21.631 ], 00:09:21.631 "product_name": "Raid Volume", 00:09:21.631 "block_size": 512, 00:09:21.631 "num_blocks": 190464, 00:09:21.631 "uuid": "d1c37c27-f050-4ea5-84d4-80346ac9b4b1", 00:09:21.631 "assigned_rate_limits": { 00:09:21.631 "rw_ios_per_sec": 0, 00:09:21.631 "rw_mbytes_per_sec": 0, 00:09:21.631 "r_mbytes_per_sec": 0, 00:09:21.631 "w_mbytes_per_sec": 0 00:09:21.631 }, 00:09:21.631 "claimed": false, 00:09:21.631 "zoned": false, 00:09:21.631 "supported_io_types": { 00:09:21.631 "read": true, 00:09:21.631 "write": true, 00:09:21.631 "unmap": true, 00:09:21.631 "flush": true, 00:09:21.631 "reset": true, 00:09:21.631 "nvme_admin": false, 00:09:21.631 "nvme_io": false, 00:09:21.631 "nvme_io_md": false, 00:09:21.631 "write_zeroes": true, 00:09:21.631 "zcopy": false, 00:09:21.631 "get_zone_info": false, 00:09:21.631 "zone_management": false, 00:09:21.631 "zone_append": false, 00:09:21.631 "compare": false, 00:09:21.631 "compare_and_write": false, 00:09:21.631 "abort": false, 00:09:21.631 "seek_hole": false, 00:09:21.631 "seek_data": false, 00:09:21.631 "copy": false, 00:09:21.631 "nvme_iov_md": false 00:09:21.631 }, 00:09:21.631 "memory_domains": [ 00:09:21.631 { 00:09:21.631 "dma_device_id": "system", 00:09:21.631 "dma_device_type": 1 00:09:21.631 }, 00:09:21.631 { 00:09:21.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.631 "dma_device_type": 2 
00:09:21.631 }, 00:09:21.631 { 00:09:21.631 "dma_device_id": "system", 00:09:21.631 "dma_device_type": 1 00:09:21.631 }, 00:09:21.631 { 00:09:21.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.631 "dma_device_type": 2 00:09:21.631 }, 00:09:21.631 { 00:09:21.631 "dma_device_id": "system", 00:09:21.631 "dma_device_type": 1 00:09:21.631 }, 00:09:21.631 { 00:09:21.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.631 "dma_device_type": 2 00:09:21.631 } 00:09:21.631 ], 00:09:21.631 "driver_specific": { 00:09:21.631 "raid": { 00:09:21.631 "uuid": "d1c37c27-f050-4ea5-84d4-80346ac9b4b1", 00:09:21.631 "strip_size_kb": 64, 00:09:21.631 "state": "online", 00:09:21.631 "raid_level": "concat", 00:09:21.631 "superblock": true, 00:09:21.631 "num_base_bdevs": 3, 00:09:21.631 "num_base_bdevs_discovered": 3, 00:09:21.631 "num_base_bdevs_operational": 3, 00:09:21.631 "base_bdevs_list": [ 00:09:21.631 { 00:09:21.631 "name": "NewBaseBdev", 00:09:21.631 "uuid": "f8b42ddf-29dd-46ef-8ab4-6d7783d36c7f", 00:09:21.631 "is_configured": true, 00:09:21.631 "data_offset": 2048, 00:09:21.631 "data_size": 63488 00:09:21.631 }, 00:09:21.631 { 00:09:21.631 "name": "BaseBdev2", 00:09:21.631 "uuid": "df17ea69-6610-4b15-b68f-59df05ff4bba", 00:09:21.631 "is_configured": true, 00:09:21.631 "data_offset": 2048, 00:09:21.631 "data_size": 63488 00:09:21.631 }, 00:09:21.631 { 00:09:21.631 "name": "BaseBdev3", 00:09:21.631 "uuid": "6bec4a91-5173-4da7-896d-a5fe870b4e78", 00:09:21.631 "is_configured": true, 00:09:21.631 "data_offset": 2048, 00:09:21.631 "data_size": 63488 00:09:21.631 } 00:09:21.631 ] 00:09:21.631 } 00:09:21.631 } 00:09:21.631 }' 00:09:21.631 08:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:21.631 08:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:21.631 BaseBdev2 00:09:21.631 BaseBdev3' 00:09:21.631 
08:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.631 08:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:21.631 08:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.631 08:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.631 08:45:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:21.631 08:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.631 08:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.631 08:45:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.631 08:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.631 08:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.631 08:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.631 08:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:21.631 08:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.631 08:45:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.631 08:45:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.631 08:45:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.631 08:45:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.631 08:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.632 08:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.632 08:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.632 08:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:21.632 08:45:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.632 08:45:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.632 08:45:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.632 08:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.632 08:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.632 08:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:21.632 08:45:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.632 08:45:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.632 [2024-10-05 08:45:58.093086] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:21.632 [2024-10-05 08:45:58.093151] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:21.632 [2024-10-05 08:45:58.093247] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:21.632 [2024-10-05 08:45:58.093328] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base 
bdevs is 0, going to free all in destruct 00:09:21.632 [2024-10-05 08:45:58.093374] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:21.632 08:45:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.632 08:45:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 65452 00:09:21.632 08:45:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 65452 ']' 00:09:21.632 08:45:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 65452 00:09:21.632 08:45:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:21.891 08:45:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:21.891 08:45:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65452 00:09:21.891 killing process with pid 65452 00:09:21.891 08:45:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:21.891 08:45:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:21.891 08:45:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65452' 00:09:21.891 08:45:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 65452 00:09:21.891 [2024-10-05 08:45:58.140617] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:21.892 08:45:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 65452 00:09:22.153 [2024-10-05 08:45:58.456768] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:23.534 08:45:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:23.534 00:09:23.534 real 0m10.618s 00:09:23.534 
user 0m16.586s 00:09:23.534 sys 0m1.953s 00:09:23.534 ************************************ 00:09:23.534 END TEST raid_state_function_test_sb 00:09:23.534 ************************************ 00:09:23.534 08:45:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:23.534 08:45:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.534 08:45:59 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:23.534 08:45:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:23.534 08:45:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:23.534 08:45:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:23.534 ************************************ 00:09:23.534 START TEST raid_superblock_test 00:09:23.534 ************************************ 00:09:23.534 08:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:09:23.534 08:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:23.534 08:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:23.534 08:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:23.534 08:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:23.534 08:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:23.534 08:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:23.534 08:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:23.534 08:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:23.534 08:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local 
raid_bdev_name=raid_bdev1 00:09:23.534 08:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:23.534 08:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:23.534 08:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:23.534 08:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:23.534 08:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:23.534 08:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:23.534 08:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:23.534 08:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66012 00:09:23.534 08:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:23.534 08:45:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66012 00:09:23.534 08:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 66012 ']' 00:09:23.534 08:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.534 08:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:23.534 08:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:23.534 08:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:23.534 08:45:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.534 [2024-10-05 08:45:59.959282] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:09:23.534 [2024-10-05 08:45:59.959489] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66012 ] 00:09:23.793 [2024-10-05 08:46:00.130381] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.052 [2024-10-05 08:46:00.368720] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.313 [2024-10-05 08:46:00.596672] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:24.313 [2024-10-05 08:46:00.596714] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:24.313 08:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:24.313 08:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:24.313 08:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:24.313 08:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:24.313 08:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:24.313 08:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:24.313 08:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:24.313 08:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:24.313 08:46:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:24.313 08:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:24.313 08:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:24.313 08:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.313 08:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.574 malloc1 00:09:24.574 08:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.574 08:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:24.574 08:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.574 08:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.574 [2024-10-05 08:46:00.835893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:24.574 [2024-10-05 08:46:00.836060] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.574 [2024-10-05 08:46:00.836110] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:24.574 [2024-10-05 08:46:00.836142] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.574 [2024-10-05 08:46:00.838540] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.574 [2024-10-05 08:46:00.838612] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:24.574 pt1 00:09:24.574 08:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.574 08:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:24.574 08:46:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:24.574 08:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:24.574 08:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:24.574 08:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:24.574 08:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:24.574 08:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:24.574 08:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:24.574 08:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:24.574 08:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.574 08:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.574 malloc2 00:09:24.574 08:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.574 08:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:24.574 08:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.574 08:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.574 [2024-10-05 08:46:00.923775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:24.574 [2024-10-05 08:46:00.923891] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.574 [2024-10-05 08:46:00.923932] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:24.574 
[2024-10-05 08:46:00.923978] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.574 [2024-10-05 08:46:00.926321] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.574 [2024-10-05 08:46:00.926391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:24.574 pt2 00:09:24.574 08:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.574 08:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:24.574 08:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:24.574 08:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:24.574 08:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:24.574 08:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:24.574 08:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:24.574 08:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:24.574 08:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:24.574 08:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:24.574 08:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.574 08:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.574 malloc3 00:09:24.574 08:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.575 08:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:24.575 
08:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.575 08:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.575 [2024-10-05 08:46:00.984554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:24.575 [2024-10-05 08:46:00.984650] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.575 [2024-10-05 08:46:00.984693] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:24.575 [2024-10-05 08:46:00.984719] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.575 [2024-10-05 08:46:00.987046] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.575 [2024-10-05 08:46:00.987113] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:24.575 pt3 00:09:24.575 08:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.575 08:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:24.575 08:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:24.575 08:46:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:24.575 08:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.575 08:46:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.575 [2024-10-05 08:46:00.996616] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:24.575 [2024-10-05 08:46:00.998655] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:24.575 [2024-10-05 08:46:00.998755] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:24.575 [2024-10-05 
08:46:00.998929] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:24.575 [2024-10-05 08:46:00.998995] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:24.575 [2024-10-05 08:46:00.999243] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:24.575 [2024-10-05 08:46:00.999451] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:24.575 [2024-10-05 08:46:00.999491] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:24.575 [2024-10-05 08:46:00.999676] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:24.575 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.575 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:24.575 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:24.575 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:24.575 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:24.575 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.575 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.575 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.575 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.575 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.575 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.575 08:46:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.575 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.575 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:24.575 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.575 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.835 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.835 "name": "raid_bdev1", 00:09:24.835 "uuid": "e3110d9f-cec7-4134-a653-233a15604891", 00:09:24.835 "strip_size_kb": 64, 00:09:24.835 "state": "online", 00:09:24.835 "raid_level": "concat", 00:09:24.835 "superblock": true, 00:09:24.835 "num_base_bdevs": 3, 00:09:24.835 "num_base_bdevs_discovered": 3, 00:09:24.835 "num_base_bdevs_operational": 3, 00:09:24.835 "base_bdevs_list": [ 00:09:24.835 { 00:09:24.835 "name": "pt1", 00:09:24.835 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:24.835 "is_configured": true, 00:09:24.835 "data_offset": 2048, 00:09:24.835 "data_size": 63488 00:09:24.835 }, 00:09:24.835 { 00:09:24.835 "name": "pt2", 00:09:24.835 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:24.835 "is_configured": true, 00:09:24.835 "data_offset": 2048, 00:09:24.835 "data_size": 63488 00:09:24.835 }, 00:09:24.835 { 00:09:24.835 "name": "pt3", 00:09:24.835 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:24.835 "is_configured": true, 00:09:24.835 "data_offset": 2048, 00:09:24.835 "data_size": 63488 00:09:24.835 } 00:09:24.835 ] 00:09:24.835 }' 00:09:24.835 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.835 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.095 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # 
verify_raid_bdev_properties raid_bdev1 00:09:25.096 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:25.096 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:25.096 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:25.096 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:25.096 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:25.096 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:25.096 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:25.096 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.096 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.096 [2024-10-05 08:46:01.460086] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:25.096 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.096 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:25.096 "name": "raid_bdev1", 00:09:25.096 "aliases": [ 00:09:25.096 "e3110d9f-cec7-4134-a653-233a15604891" 00:09:25.096 ], 00:09:25.096 "product_name": "Raid Volume", 00:09:25.096 "block_size": 512, 00:09:25.096 "num_blocks": 190464, 00:09:25.096 "uuid": "e3110d9f-cec7-4134-a653-233a15604891", 00:09:25.096 "assigned_rate_limits": { 00:09:25.096 "rw_ios_per_sec": 0, 00:09:25.096 "rw_mbytes_per_sec": 0, 00:09:25.096 "r_mbytes_per_sec": 0, 00:09:25.096 "w_mbytes_per_sec": 0 00:09:25.096 }, 00:09:25.096 "claimed": false, 00:09:25.096 "zoned": false, 00:09:25.096 "supported_io_types": { 00:09:25.096 "read": true, 00:09:25.096 "write": true, 00:09:25.096 "unmap": true, 
00:09:25.096 "flush": true, 00:09:25.096 "reset": true, 00:09:25.096 "nvme_admin": false, 00:09:25.096 "nvme_io": false, 00:09:25.096 "nvme_io_md": false, 00:09:25.096 "write_zeroes": true, 00:09:25.096 "zcopy": false, 00:09:25.096 "get_zone_info": false, 00:09:25.096 "zone_management": false, 00:09:25.096 "zone_append": false, 00:09:25.096 "compare": false, 00:09:25.096 "compare_and_write": false, 00:09:25.096 "abort": false, 00:09:25.096 "seek_hole": false, 00:09:25.096 "seek_data": false, 00:09:25.096 "copy": false, 00:09:25.096 "nvme_iov_md": false 00:09:25.096 }, 00:09:25.096 "memory_domains": [ 00:09:25.096 { 00:09:25.096 "dma_device_id": "system", 00:09:25.096 "dma_device_type": 1 00:09:25.096 }, 00:09:25.096 { 00:09:25.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.096 "dma_device_type": 2 00:09:25.096 }, 00:09:25.096 { 00:09:25.096 "dma_device_id": "system", 00:09:25.096 "dma_device_type": 1 00:09:25.096 }, 00:09:25.096 { 00:09:25.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.096 "dma_device_type": 2 00:09:25.096 }, 00:09:25.096 { 00:09:25.096 "dma_device_id": "system", 00:09:25.096 "dma_device_type": 1 00:09:25.096 }, 00:09:25.096 { 00:09:25.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.096 "dma_device_type": 2 00:09:25.096 } 00:09:25.096 ], 00:09:25.096 "driver_specific": { 00:09:25.096 "raid": { 00:09:25.096 "uuid": "e3110d9f-cec7-4134-a653-233a15604891", 00:09:25.096 "strip_size_kb": 64, 00:09:25.096 "state": "online", 00:09:25.096 "raid_level": "concat", 00:09:25.096 "superblock": true, 00:09:25.096 "num_base_bdevs": 3, 00:09:25.096 "num_base_bdevs_discovered": 3, 00:09:25.096 "num_base_bdevs_operational": 3, 00:09:25.096 "base_bdevs_list": [ 00:09:25.096 { 00:09:25.096 "name": "pt1", 00:09:25.096 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:25.096 "is_configured": true, 00:09:25.096 "data_offset": 2048, 00:09:25.096 "data_size": 63488 00:09:25.096 }, 00:09:25.096 { 00:09:25.096 "name": "pt2", 00:09:25.096 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:09:25.096 "is_configured": true, 00:09:25.096 "data_offset": 2048, 00:09:25.096 "data_size": 63488 00:09:25.096 }, 00:09:25.096 { 00:09:25.096 "name": "pt3", 00:09:25.096 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:25.096 "is_configured": true, 00:09:25.096 "data_offset": 2048, 00:09:25.096 "data_size": 63488 00:09:25.096 } 00:09:25.096 ] 00:09:25.096 } 00:09:25.096 } 00:09:25.096 }' 00:09:25.096 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:25.096 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:25.096 pt2 00:09:25.096 pt3' 00:09:25.096 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.357 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:25.357 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.357 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:25.357 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.357 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.357 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.357 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.357 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.357 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.357 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for 
name in $base_bdev_names 00:09:25.357 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:25.357 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.357 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.357 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.357 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.357 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.357 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.357 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.357 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:25.357 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.357 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.357 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.357 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.357 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.357 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.358 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:25.358 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.358 08:46:01 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:25.358 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.358 [2024-10-05 08:46:01.763443] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:25.358 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.358 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e3110d9f-cec7-4134-a653-233a15604891 00:09:25.358 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e3110d9f-cec7-4134-a653-233a15604891 ']' 00:09:25.358 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:25.358 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.358 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.358 [2024-10-05 08:46:01.811120] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:25.358 [2024-10-05 08:46:01.811188] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:25.358 [2024-10-05 08:46:01.811281] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:25.358 [2024-10-05 08:46:01.811363] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:25.358 [2024-10-05 08:46:01.811411] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:25.358 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.358 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.358 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:25.358 08:46:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.358 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.618 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.618 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:25.618 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:25.618 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:25.618 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:25.618 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.618 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.618 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.618 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:25.618 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:25.618 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.618 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.618 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.618 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:25.618 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:25.618 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.618 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.618 
08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.618 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:25.618 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:25.618 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.618 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.618 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.618 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:25.618 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:25.618 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:25.618 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:25.618 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:25.618 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:25.619 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:25.619 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:25.619 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:25.619 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.619 08:46:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.619 [2024-10-05 08:46:01.962903] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:25.619 [2024-10-05 08:46:01.965090] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:25.619 [2024-10-05 08:46:01.965140] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:25.619 [2024-10-05 08:46:01.965192] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:25.619 [2024-10-05 08:46:01.965238] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:25.619 [2024-10-05 08:46:01.965256] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:25.619 [2024-10-05 08:46:01.965272] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:25.619 [2024-10-05 08:46:01.965282] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:25.619 request: 00:09:25.619 { 00:09:25.619 "name": "raid_bdev1", 00:09:25.619 "raid_level": "concat", 00:09:25.619 "base_bdevs": [ 00:09:25.619 "malloc1", 00:09:25.619 "malloc2", 00:09:25.619 "malloc3" 00:09:25.619 ], 00:09:25.619 "strip_size_kb": 64, 00:09:25.619 "superblock": false, 00:09:25.619 "method": "bdev_raid_create", 00:09:25.619 "req_id": 1 00:09:25.619 } 00:09:25.619 Got JSON-RPC error response 00:09:25.619 response: 00:09:25.619 { 00:09:25.619 "code": -17, 00:09:25.619 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:25.619 } 00:09:25.619 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:25.619 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 
00:09:25.619 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:25.619 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:25.619 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:25.619 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.619 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.619 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.619 08:46:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:25.619 08:46:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.619 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:25.619 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:25.619 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:25.619 08:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.619 08:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.619 [2024-10-05 08:46:02.026758] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:25.619 [2024-10-05 08:46:02.026842] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:25.619 [2024-10-05 08:46:02.026877] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:25.619 [2024-10-05 08:46:02.026905] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:25.619 [2024-10-05 08:46:02.029260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:09:25.619 [2024-10-05 08:46:02.029326] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:25.619 [2024-10-05 08:46:02.029416] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:25.619 [2024-10-05 08:46:02.029483] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:25.619 pt1 00:09:25.619 08:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.619 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:25.619 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:25.619 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.619 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:25.619 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.619 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.619 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.619 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.619 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.619 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.619 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.619 08:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.619 08:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.619 08:46:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:25.619 08:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.619 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.619 "name": "raid_bdev1", 00:09:25.619 "uuid": "e3110d9f-cec7-4134-a653-233a15604891", 00:09:25.619 "strip_size_kb": 64, 00:09:25.619 "state": "configuring", 00:09:25.619 "raid_level": "concat", 00:09:25.619 "superblock": true, 00:09:25.619 "num_base_bdevs": 3, 00:09:25.619 "num_base_bdevs_discovered": 1, 00:09:25.619 "num_base_bdevs_operational": 3, 00:09:25.619 "base_bdevs_list": [ 00:09:25.619 { 00:09:25.619 "name": "pt1", 00:09:25.619 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:25.619 "is_configured": true, 00:09:25.619 "data_offset": 2048, 00:09:25.619 "data_size": 63488 00:09:25.619 }, 00:09:25.619 { 00:09:25.619 "name": null, 00:09:25.619 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:25.619 "is_configured": false, 00:09:25.619 "data_offset": 2048, 00:09:25.619 "data_size": 63488 00:09:25.619 }, 00:09:25.619 { 00:09:25.619 "name": null, 00:09:25.619 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:25.619 "is_configured": false, 00:09:25.619 "data_offset": 2048, 00:09:25.619 "data_size": 63488 00:09:25.619 } 00:09:25.619 ] 00:09:25.619 }' 00:09:25.619 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.619 08:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.189 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:26.189 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:26.189 08:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.189 08:46:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:26.189 [2024-10-05 08:46:02.505969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:26.189 [2024-10-05 08:46:02.506117] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.189 [2024-10-05 08:46:02.506162] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:26.189 [2024-10-05 08:46:02.506190] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.189 [2024-10-05 08:46:02.506646] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.189 [2024-10-05 08:46:02.506669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:26.189 [2024-10-05 08:46:02.506750] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:26.189 [2024-10-05 08:46:02.506771] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:26.189 pt2 00:09:26.189 08:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.189 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:26.189 08:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.189 08:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.189 [2024-10-05 08:46:02.517941] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:26.189 08:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.189 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:26.189 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:26.189 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:09:26.189 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:26.189 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.189 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.189 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.189 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.189 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.189 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.189 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:26.190 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.190 08:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.190 08:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.190 08:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.190 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.190 "name": "raid_bdev1", 00:09:26.190 "uuid": "e3110d9f-cec7-4134-a653-233a15604891", 00:09:26.190 "strip_size_kb": 64, 00:09:26.190 "state": "configuring", 00:09:26.190 "raid_level": "concat", 00:09:26.190 "superblock": true, 00:09:26.190 "num_base_bdevs": 3, 00:09:26.190 "num_base_bdevs_discovered": 1, 00:09:26.190 "num_base_bdevs_operational": 3, 00:09:26.190 "base_bdevs_list": [ 00:09:26.190 { 00:09:26.190 "name": "pt1", 00:09:26.190 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:26.190 "is_configured": true, 00:09:26.190 "data_offset": 2048, 
00:09:26.190 "data_size": 63488 00:09:26.190 }, 00:09:26.190 { 00:09:26.190 "name": null, 00:09:26.190 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:26.190 "is_configured": false, 00:09:26.190 "data_offset": 0, 00:09:26.190 "data_size": 63488 00:09:26.190 }, 00:09:26.190 { 00:09:26.190 "name": null, 00:09:26.190 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:26.190 "is_configured": false, 00:09:26.190 "data_offset": 2048, 00:09:26.190 "data_size": 63488 00:09:26.190 } 00:09:26.190 ] 00:09:26.190 }' 00:09:26.190 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.190 08:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.449 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:26.449 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:26.449 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:26.449 08:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.449 08:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.710 [2024-10-05 08:46:02.925206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:26.710 [2024-10-05 08:46:02.925322] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.710 [2024-10-05 08:46:02.925358] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:26.710 [2024-10-05 08:46:02.925393] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.710 [2024-10-05 08:46:02.925884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.710 [2024-10-05 08:46:02.925950] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt2 00:09:26.710 [2024-10-05 08:46:02.926083] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:26.710 [2024-10-05 08:46:02.926154] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:26.710 pt2 00:09:26.710 08:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.710 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:26.710 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:26.710 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:26.710 08:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.710 08:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.710 [2024-10-05 08:46:02.937201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:26.710 [2024-10-05 08:46:02.937281] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:26.710 [2024-10-05 08:46:02.937311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:26.710 [2024-10-05 08:46:02.937339] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:26.710 [2024-10-05 08:46:02.937720] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:26.710 [2024-10-05 08:46:02.937781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:26.710 [2024-10-05 08:46:02.937867] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:26.710 [2024-10-05 08:46:02.937912] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:26.710 [2024-10-05 08:46:02.938059] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:26.710 [2024-10-05 08:46:02.938100] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:26.710 [2024-10-05 08:46:02.938377] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:26.710 [2024-10-05 08:46:02.938553] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:26.710 [2024-10-05 08:46:02.938588] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:26.710 [2024-10-05 08:46:02.938749] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:26.710 pt3 00:09:26.710 08:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.710 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:26.710 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:26.710 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:26.710 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:26.710 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:26.710 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:26.710 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.710 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.710 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.710 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.710 08:46:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.710 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.710 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.710 08:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.710 08:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.710 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:26.710 08:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.710 08:46:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.710 "name": "raid_bdev1", 00:09:26.710 "uuid": "e3110d9f-cec7-4134-a653-233a15604891", 00:09:26.710 "strip_size_kb": 64, 00:09:26.710 "state": "online", 00:09:26.710 "raid_level": "concat", 00:09:26.710 "superblock": true, 00:09:26.710 "num_base_bdevs": 3, 00:09:26.710 "num_base_bdevs_discovered": 3, 00:09:26.710 "num_base_bdevs_operational": 3, 00:09:26.710 "base_bdevs_list": [ 00:09:26.710 { 00:09:26.710 "name": "pt1", 00:09:26.710 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:26.710 "is_configured": true, 00:09:26.710 "data_offset": 2048, 00:09:26.710 "data_size": 63488 00:09:26.710 }, 00:09:26.710 { 00:09:26.710 "name": "pt2", 00:09:26.710 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:26.710 "is_configured": true, 00:09:26.711 "data_offset": 2048, 00:09:26.711 "data_size": 63488 00:09:26.711 }, 00:09:26.711 { 00:09:26.711 "name": "pt3", 00:09:26.711 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:26.711 "is_configured": true, 00:09:26.711 "data_offset": 2048, 00:09:26.711 "data_size": 63488 00:09:26.711 } 00:09:26.711 ] 00:09:26.711 }' 00:09:26.711 08:46:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.711 08:46:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.971 08:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:26.971 08:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:26.971 08:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:26.971 08:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:26.971 08:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:26.971 08:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:26.971 08:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:26.971 08:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:26.971 08:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.971 08:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.971 [2024-10-05 08:46:03.364898] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:26.971 08:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.971 08:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:26.971 "name": "raid_bdev1", 00:09:26.971 "aliases": [ 00:09:26.971 "e3110d9f-cec7-4134-a653-233a15604891" 00:09:26.971 ], 00:09:26.971 "product_name": "Raid Volume", 00:09:26.971 "block_size": 512, 00:09:26.971 "num_blocks": 190464, 00:09:26.971 "uuid": "e3110d9f-cec7-4134-a653-233a15604891", 00:09:26.971 "assigned_rate_limits": { 00:09:26.971 "rw_ios_per_sec": 0, 00:09:26.971 "rw_mbytes_per_sec": 0, 00:09:26.971 "r_mbytes_per_sec": 0, 00:09:26.971 
"w_mbytes_per_sec": 0 00:09:26.971 }, 00:09:26.971 "claimed": false, 00:09:26.971 "zoned": false, 00:09:26.971 "supported_io_types": { 00:09:26.971 "read": true, 00:09:26.971 "write": true, 00:09:26.971 "unmap": true, 00:09:26.971 "flush": true, 00:09:26.971 "reset": true, 00:09:26.971 "nvme_admin": false, 00:09:26.971 "nvme_io": false, 00:09:26.971 "nvme_io_md": false, 00:09:26.971 "write_zeroes": true, 00:09:26.971 "zcopy": false, 00:09:26.971 "get_zone_info": false, 00:09:26.971 "zone_management": false, 00:09:26.971 "zone_append": false, 00:09:26.971 "compare": false, 00:09:26.971 "compare_and_write": false, 00:09:26.971 "abort": false, 00:09:26.971 "seek_hole": false, 00:09:26.971 "seek_data": false, 00:09:26.971 "copy": false, 00:09:26.971 "nvme_iov_md": false 00:09:26.971 }, 00:09:26.971 "memory_domains": [ 00:09:26.971 { 00:09:26.971 "dma_device_id": "system", 00:09:26.971 "dma_device_type": 1 00:09:26.971 }, 00:09:26.971 { 00:09:26.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.971 "dma_device_type": 2 00:09:26.971 }, 00:09:26.971 { 00:09:26.971 "dma_device_id": "system", 00:09:26.971 "dma_device_type": 1 00:09:26.971 }, 00:09:26.971 { 00:09:26.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.971 "dma_device_type": 2 00:09:26.971 }, 00:09:26.971 { 00:09:26.971 "dma_device_id": "system", 00:09:26.971 "dma_device_type": 1 00:09:26.971 }, 00:09:26.971 { 00:09:26.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.971 "dma_device_type": 2 00:09:26.971 } 00:09:26.972 ], 00:09:26.972 "driver_specific": { 00:09:26.972 "raid": { 00:09:26.972 "uuid": "e3110d9f-cec7-4134-a653-233a15604891", 00:09:26.972 "strip_size_kb": 64, 00:09:26.972 "state": "online", 00:09:26.972 "raid_level": "concat", 00:09:26.972 "superblock": true, 00:09:26.972 "num_base_bdevs": 3, 00:09:26.972 "num_base_bdevs_discovered": 3, 00:09:26.972 "num_base_bdevs_operational": 3, 00:09:26.972 "base_bdevs_list": [ 00:09:26.972 { 00:09:26.972 "name": "pt1", 00:09:26.972 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:09:26.972 "is_configured": true, 00:09:26.972 "data_offset": 2048, 00:09:26.972 "data_size": 63488 00:09:26.972 }, 00:09:26.972 { 00:09:26.972 "name": "pt2", 00:09:26.972 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:26.972 "is_configured": true, 00:09:26.972 "data_offset": 2048, 00:09:26.972 "data_size": 63488 00:09:26.972 }, 00:09:26.972 { 00:09:26.972 "name": "pt3", 00:09:26.972 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:26.972 "is_configured": true, 00:09:26.972 "data_offset": 2048, 00:09:26.972 "data_size": 63488 00:09:26.972 } 00:09:26.972 ] 00:09:26.972 } 00:09:26.972 } 00:09:26.972 }' 00:09:26.972 08:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:26.972 08:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:26.972 pt2 00:09:26.972 pt3' 00:09:26.972 08:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.232 08:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:27.232 08:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.232 08:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:27.232 08:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.232 08:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.232 08:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.232 08:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.232 08:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:09:27.232 08:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.232 08:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.232 08:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.232 08:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:27.232 08:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.232 08:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.232 08:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.232 08:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.232 08:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.232 08:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.232 08:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:27.232 08:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.232 08:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.233 08:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.233 08:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.233 08:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.233 08:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.233 08:46:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:27.233 08:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:27.233 08:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.233 08:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.233 [2024-10-05 08:46:03.620347] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:27.233 08:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.233 08:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e3110d9f-cec7-4134-a653-233a15604891 '!=' e3110d9f-cec7-4134-a653-233a15604891 ']' 00:09:27.233 08:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:27.233 08:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:27.233 08:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:27.233 08:46:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66012 00:09:27.233 08:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 66012 ']' 00:09:27.233 08:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 66012 00:09:27.233 08:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:27.233 08:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:27.233 08:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66012 00:09:27.233 killing process with pid 66012 00:09:27.233 08:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:27.233 08:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:27.233 08:46:03 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66012' 00:09:27.233 08:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 66012 00:09:27.233 [2024-10-05 08:46:03.696129] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:27.233 08:46:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 66012 00:09:27.233 [2024-10-05 08:46:03.696219] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:27.233 [2024-10-05 08:46:03.696280] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:27.233 [2024-10-05 08:46:03.696296] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:27.802 [2024-10-05 08:46:04.024111] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:29.187 08:46:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:29.187 00:09:29.187 real 0m5.499s 00:09:29.187 user 0m7.592s 00:09:29.187 sys 0m1.039s 00:09:29.187 08:46:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:29.187 08:46:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.187 ************************************ 00:09:29.187 END TEST raid_superblock_test 00:09:29.187 ************************************ 00:09:29.187 08:46:05 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:29.187 08:46:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:29.187 08:46:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:29.187 08:46:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:29.187 ************************************ 00:09:29.187 START TEST raid_read_error_test 00:09:29.187 ************************************ 
00:09:29.187 08:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:09:29.187 08:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:29.187 08:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:29.187 08:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:29.187 08:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:29.187 08:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:29.187 08:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:29.187 08:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:29.187 08:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:29.187 08:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:29.187 08:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:29.187 08:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:29.187 08:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:29.187 08:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:29.187 08:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:29.187 08:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:29.187 08:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:29.187 08:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:29.187 08:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:09:29.187 08:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:29.187 08:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:29.187 08:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:29.187 08:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:29.187 08:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:29.187 08:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:29.187 08:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:29.187 08:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.xV4nrEwCrQ 00:09:29.187 08:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=66235 00:09:29.187 08:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:29.187 08:46:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 66235 00:09:29.187 08:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 66235 ']' 00:09:29.187 08:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.187 08:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:29.187 08:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:29.187 08:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:29.187 08:46:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.187 [2024-10-05 08:46:05.547899] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:09:29.187 [2024-10-05 08:46:05.548171] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66235 ] 00:09:29.454 [2024-10-05 08:46:05.717355] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.714 [2024-10-05 08:46:05.975371] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.973 [2024-10-05 08:46:06.207701] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:29.973 [2024-10-05 08:46:06.207825] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:29.973 08:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:29.974 08:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:29.974 08:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:29.974 08:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:29.974 08:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.974 08:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.974 BaseBdev1_malloc 00:09:29.974 08:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.974 08:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:09:29.974 08:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.974 08:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.974 true 00:09:29.974 08:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.974 08:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:29.974 08:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.974 08:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.974 [2024-10-05 08:46:06.443764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:29.974 [2024-10-05 08:46:06.443906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:29.974 [2024-10-05 08:46:06.443939] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:29.974 [2024-10-05 08:46:06.443992] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:30.234 [2024-10-05 08:46:06.446448] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:30.234 [2024-10-05 08:46:06.446525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:30.234 BaseBdev1 00:09:30.234 08:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.234 08:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:30.234 08:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:30.234 08:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.234 08:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:09:30.234 BaseBdev2_malloc 00:09:30.234 08:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.234 08:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:30.234 08:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.234 08:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.234 true 00:09:30.234 08:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.234 08:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:30.234 08:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.234 08:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.234 [2024-10-05 08:46:06.543672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:30.234 [2024-10-05 08:46:06.543797] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:30.234 [2024-10-05 08:46:06.543832] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:30.234 [2024-10-05 08:46:06.543863] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:30.234 [2024-10-05 08:46:06.546226] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:30.234 [2024-10-05 08:46:06.546301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:30.234 BaseBdev2 00:09:30.234 08:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.234 08:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:30.234 08:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- 
# rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:30.234 08:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.234 08:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.234 BaseBdev3_malloc 00:09:30.234 08:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.234 08:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:30.234 08:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.234 08:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.234 true 00:09:30.234 08:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.234 08:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:30.234 08:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.234 08:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.234 [2024-10-05 08:46:06.616584] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:30.234 [2024-10-05 08:46:06.616638] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:30.234 [2024-10-05 08:46:06.616654] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:30.234 [2024-10-05 08:46:06.616665] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:30.234 [2024-10-05 08:46:06.618958] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:30.234 [2024-10-05 08:46:06.619009] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:30.234 BaseBdev3 00:09:30.234 08:46:06 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.234 08:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:30.234 08:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.234 08:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.234 [2024-10-05 08:46:06.628644] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:30.234 [2024-10-05 08:46:06.630647] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:30.234 [2024-10-05 08:46:06.630791] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:30.234 [2024-10-05 08:46:06.630994] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:30.234 [2024-10-05 08:46:06.631007] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:30.234 [2024-10-05 08:46:06.631244] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:30.234 [2024-10-05 08:46:06.631386] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:30.234 [2024-10-05 08:46:06.631397] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:30.234 [2024-10-05 08:46:06.631525] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:30.234 08:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.234 08:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:30.234 08:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:30.234 08:46:06 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:30.234 08:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:30.234 08:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.234 08:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.234 08:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.235 08:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.235 08:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.235 08:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.235 08:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.235 08:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.235 08:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:30.235 08:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.235 08:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.235 08:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.235 "name": "raid_bdev1", 00:09:30.235 "uuid": "269a2a56-e0e1-4da2-883f-dbb538a5c5be", 00:09:30.235 "strip_size_kb": 64, 00:09:30.235 "state": "online", 00:09:30.235 "raid_level": "concat", 00:09:30.235 "superblock": true, 00:09:30.235 "num_base_bdevs": 3, 00:09:30.235 "num_base_bdevs_discovered": 3, 00:09:30.235 "num_base_bdevs_operational": 3, 00:09:30.235 "base_bdevs_list": [ 00:09:30.235 { 00:09:30.235 "name": "BaseBdev1", 00:09:30.235 "uuid": "493abe35-d21d-56fa-8c65-a281b6579e58", 00:09:30.235 
"is_configured": true, 00:09:30.235 "data_offset": 2048, 00:09:30.235 "data_size": 63488 00:09:30.235 }, 00:09:30.235 { 00:09:30.235 "name": "BaseBdev2", 00:09:30.235 "uuid": "6545bc32-1938-5ade-9347-5d96f358602f", 00:09:30.235 "is_configured": true, 00:09:30.235 "data_offset": 2048, 00:09:30.235 "data_size": 63488 00:09:30.235 }, 00:09:30.235 { 00:09:30.235 "name": "BaseBdev3", 00:09:30.235 "uuid": "4b5d8eed-9e84-5c85-8366-e118093bdb89", 00:09:30.235 "is_configured": true, 00:09:30.235 "data_offset": 2048, 00:09:30.235 "data_size": 63488 00:09:30.235 } 00:09:30.235 ] 00:09:30.235 }' 00:09:30.235 08:46:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.235 08:46:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.805 08:46:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:30.805 08:46:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:30.805 [2024-10-05 08:46:07.149245] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:31.745 08:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:31.745 08:46:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.745 08:46:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.745 08:46:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.745 08:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:31.745 08:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:31.745 08:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:31.745 08:46:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:31.745 08:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:31.745 08:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:31.745 08:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:31.745 08:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.745 08:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.745 08:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.745 08:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.745 08:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.745 08:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.745 08:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.745 08:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:31.745 08:46:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.745 08:46:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.745 08:46:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.745 08:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.745 "name": "raid_bdev1", 00:09:31.745 "uuid": "269a2a56-e0e1-4da2-883f-dbb538a5c5be", 00:09:31.745 "strip_size_kb": 64, 00:09:31.745 "state": "online", 00:09:31.745 "raid_level": "concat", 00:09:31.745 "superblock": true, 00:09:31.745 "num_base_bdevs": 3, 
00:09:31.745 "num_base_bdevs_discovered": 3, 00:09:31.745 "num_base_bdevs_operational": 3, 00:09:31.745 "base_bdevs_list": [ 00:09:31.745 { 00:09:31.745 "name": "BaseBdev1", 00:09:31.745 "uuid": "493abe35-d21d-56fa-8c65-a281b6579e58", 00:09:31.745 "is_configured": true, 00:09:31.745 "data_offset": 2048, 00:09:31.745 "data_size": 63488 00:09:31.745 }, 00:09:31.745 { 00:09:31.745 "name": "BaseBdev2", 00:09:31.745 "uuid": "6545bc32-1938-5ade-9347-5d96f358602f", 00:09:31.745 "is_configured": true, 00:09:31.745 "data_offset": 2048, 00:09:31.745 "data_size": 63488 00:09:31.745 }, 00:09:31.745 { 00:09:31.745 "name": "BaseBdev3", 00:09:31.745 "uuid": "4b5d8eed-9e84-5c85-8366-e118093bdb89", 00:09:31.745 "is_configured": true, 00:09:31.745 "data_offset": 2048, 00:09:31.745 "data_size": 63488 00:09:31.745 } 00:09:31.745 ] 00:09:31.745 }' 00:09:31.745 08:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.745 08:46:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.314 08:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:32.314 08:46:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.314 08:46:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.314 [2024-10-05 08:46:08.501675] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:32.314 [2024-10-05 08:46:08.501717] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:32.314 [2024-10-05 08:46:08.504243] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:32.314 [2024-10-05 08:46:08.504292] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:32.314 [2024-10-05 08:46:08.504331] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:09:32.314 [2024-10-05 08:46:08.504340] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:32.314 { 00:09:32.314 "results": [ 00:09:32.314 { 00:09:32.314 "job": "raid_bdev1", 00:09:32.314 "core_mask": "0x1", 00:09:32.314 "workload": "randrw", 00:09:32.314 "percentage": 50, 00:09:32.314 "status": "finished", 00:09:32.314 "queue_depth": 1, 00:09:32.314 "io_size": 131072, 00:09:32.314 "runtime": 1.35288, 00:09:32.314 "iops": 14253.296670805985, 00:09:32.314 "mibps": 1781.662083850748, 00:09:32.314 "io_failed": 1, 00:09:32.314 "io_timeout": 0, 00:09:32.314 "avg_latency_us": 99.01360731660702, 00:09:32.314 "min_latency_us": 25.823580786026202, 00:09:32.314 "max_latency_us": 1352.216593886463 00:09:32.314 } 00:09:32.315 ], 00:09:32.315 "core_count": 1 00:09:32.315 } 00:09:32.315 08:46:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.315 08:46:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 66235 00:09:32.315 08:46:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 66235 ']' 00:09:32.315 08:46:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 66235 00:09:32.315 08:46:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:32.315 08:46:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:32.315 08:46:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66235 00:09:32.315 killing process with pid 66235 00:09:32.315 08:46:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:32.315 08:46:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:32.315 08:46:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66235' 
00:09:32.315 08:46:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 66235 00:09:32.315 [2024-10-05 08:46:08.550650] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:32.315 08:46:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 66235 00:09:32.574 [2024-10-05 08:46:08.790631] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:33.955 08:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.xV4nrEwCrQ 00:09:33.955 08:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:33.955 08:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:33.955 08:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:09:33.955 08:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:33.955 08:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:33.955 08:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:33.955 08:46:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:09:33.955 00:09:33.955 real 0m4.766s 00:09:33.955 user 0m5.411s 00:09:33.955 sys 0m0.735s 00:09:33.955 ************************************ 00:09:33.955 END TEST raid_read_error_test 00:09:33.955 ************************************ 00:09:33.955 08:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:33.955 08:46:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.955 08:46:10 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:33.955 08:46:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:33.955 08:46:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:33.955 08:46:10 
bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:33.955 ************************************ 00:09:33.955 START TEST raid_write_error_test 00:09:33.955 ************************************ 00:09:33.955 08:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:09:33.955 08:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:33.955 08:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:33.955 08:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:33.955 08:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:33.955 08:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:33.955 08:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:33.955 08:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:33.955 08:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:33.955 08:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:33.955 08:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:33.955 08:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:33.955 08:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:33.955 08:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:33.955 08:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:33.955 08:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:33.955 08:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local 
base_bdevs 00:09:33.955 08:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:33.955 08:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:33.955 08:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:33.955 08:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:33.955 08:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:33.955 08:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:33.955 08:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:33.955 08:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:33.955 08:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:33.955 08:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.xsEaojZkgB 00:09:33.955 08:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=66356 00:09:33.955 08:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:33.955 08:46:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 66356 00:09:33.955 08:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 66356 ']' 00:09:33.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:33.955 08:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.955 08:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:33.956 08:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.956 08:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:33.956 08:46:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.956 [2024-10-05 08:46:10.389056] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:09:33.956 [2024-10-05 08:46:10.389195] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66356 ] 00:09:34.216 [2024-10-05 08:46:10.560718] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.476 [2024-10-05 08:46:10.806646] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.736 [2024-10-05 08:46:11.031321] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:34.736 [2024-10-05 08:46:11.031359] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:34.996 08:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:34.996 08:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:34.996 08:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:34.996 08:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:34.996 08:46:11 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.997 BaseBdev1_malloc 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.997 true 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.997 [2024-10-05 08:46:11.286657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:34.997 [2024-10-05 08:46:11.286798] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.997 [2024-10-05 08:46:11.286833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:34.997 [2024-10-05 08:46:11.286864] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.997 [2024-10-05 08:46:11.289270] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.997 [2024-10-05 08:46:11.289350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:34.997 BaseBdev1 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.997 BaseBdev2_malloc 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.997 true 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.997 [2024-10-05 08:46:11.367521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:34.997 [2024-10-05 08:46:11.367578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.997 [2024-10-05 08:46:11.367593] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:34.997 [2024-10-05 08:46:11.367605] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.997 [2024-10-05 08:46:11.369860] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:09:34.997 [2024-10-05 08:46:11.369899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:34.997 BaseBdev2 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.997 BaseBdev3_malloc 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.997 true 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.997 [2024-10-05 08:46:11.438897] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:34.997 [2024-10-05 08:46:11.439020] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.997 [2024-10-05 08:46:11.439055] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:34.997 [2024-10-05 08:46:11.439086] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.997 [2024-10-05 08:46:11.441449] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.997 [2024-10-05 08:46:11.441523] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:34.997 BaseBdev3 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.997 [2024-10-05 08:46:11.450971] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:34.997 [2024-10-05 08:46:11.452940] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:34.997 [2024-10-05 08:46:11.453065] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:34.997 [2024-10-05 08:46:11.453285] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:34.997 [2024-10-05 08:46:11.453334] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:34.997 [2024-10-05 08:46:11.453587] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:34.997 [2024-10-05 08:46:11.453777] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:34.997 [2024-10-05 08:46:11.453816] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
00:09:34.997 [2024-10-05 08:46:11.454000] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.997 08:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:35.257 08:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.257 08:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:09:35.257 "name": "raid_bdev1", 00:09:35.257 "uuid": "5bc87bf2-c666-4695-b27c-9c5829525f17", 00:09:35.257 "strip_size_kb": 64, 00:09:35.257 "state": "online", 00:09:35.257 "raid_level": "concat", 00:09:35.257 "superblock": true, 00:09:35.257 "num_base_bdevs": 3, 00:09:35.257 "num_base_bdevs_discovered": 3, 00:09:35.257 "num_base_bdevs_operational": 3, 00:09:35.257 "base_bdevs_list": [ 00:09:35.257 { 00:09:35.257 "name": "BaseBdev1", 00:09:35.257 "uuid": "5082244d-c4f7-529b-9c1e-c28eaf3aa6d7", 00:09:35.257 "is_configured": true, 00:09:35.257 "data_offset": 2048, 00:09:35.257 "data_size": 63488 00:09:35.257 }, 00:09:35.257 { 00:09:35.257 "name": "BaseBdev2", 00:09:35.257 "uuid": "fbe3352c-3d29-5bcf-ad4f-f2d600b09394", 00:09:35.257 "is_configured": true, 00:09:35.257 "data_offset": 2048, 00:09:35.257 "data_size": 63488 00:09:35.257 }, 00:09:35.257 { 00:09:35.257 "name": "BaseBdev3", 00:09:35.257 "uuid": "3218aacc-bf1d-585a-8038-182af06a4ecb", 00:09:35.257 "is_configured": true, 00:09:35.257 "data_offset": 2048, 00:09:35.257 "data_size": 63488 00:09:35.257 } 00:09:35.257 ] 00:09:35.257 }' 00:09:35.257 08:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.257 08:46:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.517 08:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:35.517 08:46:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:35.777 [2024-10-05 08:46:12.011330] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:36.716 08:46:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:36.716 08:46:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.716 08:46:12 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:36.716 08:46:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.716 08:46:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:36.716 08:46:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:36.716 08:46:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:36.716 08:46:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:36.716 08:46:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:36.716 08:46:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:36.716 08:46:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:36.716 08:46:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.716 08:46:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.717 08:46:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.717 08:46:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.717 08:46:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.717 08:46:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.717 08:46:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.717 08:46:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.717 08:46:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.717 08:46:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:09:36.717 08:46:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.717 08:46:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.717 "name": "raid_bdev1", 00:09:36.717 "uuid": "5bc87bf2-c666-4695-b27c-9c5829525f17", 00:09:36.717 "strip_size_kb": 64, 00:09:36.717 "state": "online", 00:09:36.717 "raid_level": "concat", 00:09:36.717 "superblock": true, 00:09:36.717 "num_base_bdevs": 3, 00:09:36.717 "num_base_bdevs_discovered": 3, 00:09:36.717 "num_base_bdevs_operational": 3, 00:09:36.717 "base_bdevs_list": [ 00:09:36.717 { 00:09:36.717 "name": "BaseBdev1", 00:09:36.717 "uuid": "5082244d-c4f7-529b-9c1e-c28eaf3aa6d7", 00:09:36.717 "is_configured": true, 00:09:36.717 "data_offset": 2048, 00:09:36.717 "data_size": 63488 00:09:36.717 }, 00:09:36.717 { 00:09:36.717 "name": "BaseBdev2", 00:09:36.717 "uuid": "fbe3352c-3d29-5bcf-ad4f-f2d600b09394", 00:09:36.717 "is_configured": true, 00:09:36.717 "data_offset": 2048, 00:09:36.717 "data_size": 63488 00:09:36.717 }, 00:09:36.717 { 00:09:36.717 "name": "BaseBdev3", 00:09:36.717 "uuid": "3218aacc-bf1d-585a-8038-182af06a4ecb", 00:09:36.717 "is_configured": true, 00:09:36.717 "data_offset": 2048, 00:09:36.717 "data_size": 63488 00:09:36.717 } 00:09:36.717 ] 00:09:36.717 }' 00:09:36.717 08:46:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.717 08:46:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.980 08:46:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:36.980 08:46:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.980 08:46:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.980 [2024-10-05 08:46:13.327593] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 
00:09:36.980 [2024-10-05 08:46:13.327715] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:36.980 [2024-10-05 08:46:13.330362] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:36.980 [2024-10-05 08:46:13.330456] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:36.980 [2024-10-05 08:46:13.330518] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:36.980 [2024-10-05 08:46:13.330557] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:36.980 08:46:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.980 { 00:09:36.980 "results": [ 00:09:36.980 { 00:09:36.980 "job": "raid_bdev1", 00:09:36.980 "core_mask": "0x1", 00:09:36.980 "workload": "randrw", 00:09:36.980 "percentage": 50, 00:09:36.980 "status": "finished", 00:09:36.980 "queue_depth": 1, 00:09:36.980 "io_size": 131072, 00:09:36.980 "runtime": 1.316769, 00:09:36.980 "iops": 14188.517500032276, 00:09:36.980 "mibps": 1773.5646875040345, 00:09:36.980 "io_failed": 1, 00:09:36.980 "io_timeout": 0, 00:09:36.980 "avg_latency_us": 99.18895498471943, 00:09:36.980 "min_latency_us": 25.9353711790393, 00:09:36.980 "max_latency_us": 1359.3711790393013 00:09:36.980 } 00:09:36.980 ], 00:09:36.980 "core_count": 1 00:09:36.980 } 00:09:36.980 08:46:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 66356 00:09:36.980 08:46:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 66356 ']' 00:09:36.980 08:46:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 66356 00:09:36.980 08:46:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:36.980 08:46:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:09:36.980 08:46:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66356 00:09:36.980 08:46:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:36.980 08:46:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:36.980 08:46:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66356' 00:09:36.980 killing process with pid 66356 00:09:36.980 08:46:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 66356 00:09:36.980 [2024-10-05 08:46:13.375656] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:36.980 08:46:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 66356 00:09:37.244 [2024-10-05 08:46:13.623523] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:38.623 08:46:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.xsEaojZkgB 00:09:38.623 08:46:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:38.623 08:46:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:38.623 08:46:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:09:38.623 08:46:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:38.623 08:46:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:38.623 08:46:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:38.623 08:46:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:09:38.623 00:09:38.623 real 0m4.762s 00:09:38.623 user 0m5.451s 00:09:38.623 sys 0m0.692s 00:09:38.623 08:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:38.623 
************************************ 00:09:38.623 END TEST raid_write_error_test 00:09:38.623 ************************************ 00:09:38.623 08:46:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.623 08:46:15 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:38.883 08:46:15 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:09:38.883 08:46:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:38.883 08:46:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:38.883 08:46:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:38.883 ************************************ 00:09:38.883 START TEST raid_state_function_test 00:09:38.883 ************************************ 00:09:38.883 08:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:09:38.883 08:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:38.883 08:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:38.883 08:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:38.883 08:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:38.883 08:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:38.883 08:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:38.883 08:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:38.883 08:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:38.883 08:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:38.883 08:46:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:38.883 08:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:38.883 08:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:38.883 08:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:38.883 08:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:38.883 08:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:38.883 08:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:38.883 08:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:38.883 08:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:38.883 08:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:38.883 08:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:38.883 08:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:38.883 08:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:38.883 08:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:38.883 08:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:38.883 08:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:38.883 08:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=66470 00:09:38.883 08:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 
00:09:38.883 08:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66470' 00:09:38.883 Process raid pid: 66470 00:09:38.883 08:46:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 66470 00:09:38.883 08:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 66470 ']' 00:09:38.883 08:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.883 08:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:38.883 08:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.883 08:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:38.883 08:46:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.883 [2024-10-05 08:46:15.208912] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:09:38.883 [2024-10-05 08:46:15.209148] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:39.143 [2024-10-05 08:46:15.375403] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.402 [2024-10-05 08:46:15.627347] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.402 [2024-10-05 08:46:15.866431] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.402 [2024-10-05 08:46:15.866467] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.662 08:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:39.662 08:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:39.663 08:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:39.663 08:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.663 08:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.663 [2024-10-05 08:46:16.042695] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:39.663 [2024-10-05 08:46:16.042756] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:39.663 [2024-10-05 08:46:16.042767] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:39.663 [2024-10-05 08:46:16.042779] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:39.663 [2024-10-05 08:46:16.042788] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:09:39.663 [2024-10-05 08:46:16.042797] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:39.663 08:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.663 08:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:39.663 08:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.663 08:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.663 08:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:39.663 08:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:39.663 08:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.663 08:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.663 08:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.663 08:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.663 08:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.663 08:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.663 08:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.663 08:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.663 08:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.663 08:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.663 08:46:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.663 "name": "Existed_Raid", 00:09:39.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.663 "strip_size_kb": 0, 00:09:39.663 "state": "configuring", 00:09:39.663 "raid_level": "raid1", 00:09:39.663 "superblock": false, 00:09:39.663 "num_base_bdevs": 3, 00:09:39.663 "num_base_bdevs_discovered": 0, 00:09:39.663 "num_base_bdevs_operational": 3, 00:09:39.663 "base_bdevs_list": [ 00:09:39.663 { 00:09:39.663 "name": "BaseBdev1", 00:09:39.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.663 "is_configured": false, 00:09:39.663 "data_offset": 0, 00:09:39.663 "data_size": 0 00:09:39.663 }, 00:09:39.663 { 00:09:39.663 "name": "BaseBdev2", 00:09:39.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.663 "is_configured": false, 00:09:39.663 "data_offset": 0, 00:09:39.663 "data_size": 0 00:09:39.663 }, 00:09:39.663 { 00:09:39.663 "name": "BaseBdev3", 00:09:39.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.663 "is_configured": false, 00:09:39.663 "data_offset": 0, 00:09:39.663 "data_size": 0 00:09:39.663 } 00:09:39.663 ] 00:09:39.663 }' 00:09:39.663 08:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.663 08:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.233 08:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:40.233 08:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.233 08:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.233 [2024-10-05 08:46:16.453968] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:40.233 [2024-10-05 08:46:16.454103] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 
00:09:40.233 08:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.233 08:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:40.233 08:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.233 08:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.233 [2024-10-05 08:46:16.461924] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:40.233 [2024-10-05 08:46:16.462029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:40.233 [2024-10-05 08:46:16.462057] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:40.233 [2024-10-05 08:46:16.462080] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:40.233 [2024-10-05 08:46:16.462097] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:40.233 [2024-10-05 08:46:16.462118] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:40.233 08:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.233 08:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:40.233 08:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.233 08:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.234 [2024-10-05 08:46:16.541389] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:40.234 BaseBdev1 00:09:40.234 08:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:40.234 08:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:40.234 08:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:40.234 08:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:40.234 08:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:40.234 08:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:40.234 08:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:40.234 08:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:40.234 08:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.234 08:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.234 08:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.234 08:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:40.234 08:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.234 08:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.234 [ 00:09:40.234 { 00:09:40.234 "name": "BaseBdev1", 00:09:40.234 "aliases": [ 00:09:40.234 "f83090c1-bf25-480d-943e-82f28b9d323b" 00:09:40.234 ], 00:09:40.234 "product_name": "Malloc disk", 00:09:40.234 "block_size": 512, 00:09:40.234 "num_blocks": 65536, 00:09:40.234 "uuid": "f83090c1-bf25-480d-943e-82f28b9d323b", 00:09:40.234 "assigned_rate_limits": { 00:09:40.234 "rw_ios_per_sec": 0, 00:09:40.234 "rw_mbytes_per_sec": 0, 00:09:40.234 "r_mbytes_per_sec": 0, 00:09:40.234 "w_mbytes_per_sec": 0 00:09:40.234 }, 
00:09:40.234 "claimed": true, 00:09:40.234 "claim_type": "exclusive_write", 00:09:40.234 "zoned": false, 00:09:40.234 "supported_io_types": { 00:09:40.234 "read": true, 00:09:40.234 "write": true, 00:09:40.234 "unmap": true, 00:09:40.234 "flush": true, 00:09:40.234 "reset": true, 00:09:40.234 "nvme_admin": false, 00:09:40.234 "nvme_io": false, 00:09:40.234 "nvme_io_md": false, 00:09:40.234 "write_zeroes": true, 00:09:40.234 "zcopy": true, 00:09:40.234 "get_zone_info": false, 00:09:40.234 "zone_management": false, 00:09:40.234 "zone_append": false, 00:09:40.234 "compare": false, 00:09:40.234 "compare_and_write": false, 00:09:40.234 "abort": true, 00:09:40.234 "seek_hole": false, 00:09:40.234 "seek_data": false, 00:09:40.234 "copy": true, 00:09:40.234 "nvme_iov_md": false 00:09:40.234 }, 00:09:40.234 "memory_domains": [ 00:09:40.234 { 00:09:40.234 "dma_device_id": "system", 00:09:40.234 "dma_device_type": 1 00:09:40.234 }, 00:09:40.234 { 00:09:40.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.234 "dma_device_type": 2 00:09:40.234 } 00:09:40.234 ], 00:09:40.234 "driver_specific": {} 00:09:40.234 } 00:09:40.234 ] 00:09:40.234 08:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.234 08:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:40.234 08:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:40.234 08:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.234 08:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.234 08:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:40.234 08:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:40.234 08:46:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.234 08:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.234 08:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.234 08:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.234 08:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.234 08:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.234 08:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.234 08:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.234 08:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.234 08:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.234 08:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.234 "name": "Existed_Raid", 00:09:40.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.234 "strip_size_kb": 0, 00:09:40.234 "state": "configuring", 00:09:40.234 "raid_level": "raid1", 00:09:40.234 "superblock": false, 00:09:40.234 "num_base_bdevs": 3, 00:09:40.234 "num_base_bdevs_discovered": 1, 00:09:40.234 "num_base_bdevs_operational": 3, 00:09:40.234 "base_bdevs_list": [ 00:09:40.234 { 00:09:40.234 "name": "BaseBdev1", 00:09:40.234 "uuid": "f83090c1-bf25-480d-943e-82f28b9d323b", 00:09:40.234 "is_configured": true, 00:09:40.234 "data_offset": 0, 00:09:40.234 "data_size": 65536 00:09:40.234 }, 00:09:40.234 { 00:09:40.234 "name": "BaseBdev2", 00:09:40.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.234 "is_configured": false, 00:09:40.234 
"data_offset": 0, 00:09:40.234 "data_size": 0 00:09:40.234 }, 00:09:40.234 { 00:09:40.234 "name": "BaseBdev3", 00:09:40.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.234 "is_configured": false, 00:09:40.234 "data_offset": 0, 00:09:40.234 "data_size": 0 00:09:40.234 } 00:09:40.234 ] 00:09:40.234 }' 00:09:40.234 08:46:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.234 08:46:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.804 08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:40.804 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.804 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.804 [2024-10-05 08:46:17.012689] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:40.804 [2024-10-05 08:46:17.012809] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:40.804 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.804 08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:40.804 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.804 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.804 [2024-10-05 08:46:17.024694] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:40.804 [2024-10-05 08:46:17.026863] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:40.804 [2024-10-05 08:46:17.026941] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 
doesn't exist now 00:09:40.804 [2024-10-05 08:46:17.026985] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:40.804 [2024-10-05 08:46:17.027013] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:40.804 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.804 08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:40.804 08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:40.804 08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:40.804 08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.804 08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.804 08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:40.804 08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:40.804 08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.804 08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.804 08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.804 08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.804 08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.804 08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.804 08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:40.804 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.804 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.804 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.804 08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.804 "name": "Existed_Raid", 00:09:40.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.804 "strip_size_kb": 0, 00:09:40.804 "state": "configuring", 00:09:40.804 "raid_level": "raid1", 00:09:40.804 "superblock": false, 00:09:40.804 "num_base_bdevs": 3, 00:09:40.804 "num_base_bdevs_discovered": 1, 00:09:40.804 "num_base_bdevs_operational": 3, 00:09:40.804 "base_bdevs_list": [ 00:09:40.804 { 00:09:40.804 "name": "BaseBdev1", 00:09:40.804 "uuid": "f83090c1-bf25-480d-943e-82f28b9d323b", 00:09:40.804 "is_configured": true, 00:09:40.804 "data_offset": 0, 00:09:40.804 "data_size": 65536 00:09:40.804 }, 00:09:40.804 { 00:09:40.804 "name": "BaseBdev2", 00:09:40.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.804 "is_configured": false, 00:09:40.804 "data_offset": 0, 00:09:40.804 "data_size": 0 00:09:40.804 }, 00:09:40.804 { 00:09:40.804 "name": "BaseBdev3", 00:09:40.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.804 "is_configured": false, 00:09:40.804 "data_offset": 0, 00:09:40.804 "data_size": 0 00:09:40.804 } 00:09:40.804 ] 00:09:40.804 }' 00:09:40.804 08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.804 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.064 08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:41.064 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.064 
08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.064 [2024-10-05 08:46:17.524485] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:41.064 BaseBdev2 00:09:41.064 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.064 08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:41.064 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:41.064 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:41.064 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:41.064 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:41.064 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:41.064 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:41.064 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.064 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.323 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.324 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:41.324 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.324 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.324 [ 00:09:41.324 { 00:09:41.324 "name": "BaseBdev2", 00:09:41.324 "aliases": [ 00:09:41.324 "d616dd53-e900-46a7-886e-b01973f5405c" 00:09:41.324 ], 00:09:41.324 "product_name": 
"Malloc disk", 00:09:41.324 "block_size": 512, 00:09:41.324 "num_blocks": 65536, 00:09:41.324 "uuid": "d616dd53-e900-46a7-886e-b01973f5405c", 00:09:41.324 "assigned_rate_limits": { 00:09:41.324 "rw_ios_per_sec": 0, 00:09:41.324 "rw_mbytes_per_sec": 0, 00:09:41.324 "r_mbytes_per_sec": 0, 00:09:41.324 "w_mbytes_per_sec": 0 00:09:41.324 }, 00:09:41.324 "claimed": true, 00:09:41.324 "claim_type": "exclusive_write", 00:09:41.324 "zoned": false, 00:09:41.324 "supported_io_types": { 00:09:41.324 "read": true, 00:09:41.324 "write": true, 00:09:41.324 "unmap": true, 00:09:41.324 "flush": true, 00:09:41.324 "reset": true, 00:09:41.324 "nvme_admin": false, 00:09:41.324 "nvme_io": false, 00:09:41.324 "nvme_io_md": false, 00:09:41.324 "write_zeroes": true, 00:09:41.324 "zcopy": true, 00:09:41.324 "get_zone_info": false, 00:09:41.324 "zone_management": false, 00:09:41.324 "zone_append": false, 00:09:41.324 "compare": false, 00:09:41.324 "compare_and_write": false, 00:09:41.324 "abort": true, 00:09:41.324 "seek_hole": false, 00:09:41.324 "seek_data": false, 00:09:41.324 "copy": true, 00:09:41.324 "nvme_iov_md": false 00:09:41.324 }, 00:09:41.324 "memory_domains": [ 00:09:41.324 { 00:09:41.324 "dma_device_id": "system", 00:09:41.324 "dma_device_type": 1 00:09:41.324 }, 00:09:41.324 { 00:09:41.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.324 "dma_device_type": 2 00:09:41.324 } 00:09:41.324 ], 00:09:41.324 "driver_specific": {} 00:09:41.324 } 00:09:41.324 ] 00:09:41.324 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.324 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:41.324 08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:41.324 08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:41.324 08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:41.324 08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.324 08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.324 08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:41.324 08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:41.324 08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.324 08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.324 08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.324 08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.324 08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.324 08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.324 08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.324 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.324 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.324 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.324 08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.324 "name": "Existed_Raid", 00:09:41.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.324 "strip_size_kb": 0, 00:09:41.324 "state": "configuring", 00:09:41.324 "raid_level": "raid1", 00:09:41.324 "superblock": false, 00:09:41.324 
"num_base_bdevs": 3, 00:09:41.324 "num_base_bdevs_discovered": 2, 00:09:41.324 "num_base_bdevs_operational": 3, 00:09:41.324 "base_bdevs_list": [ 00:09:41.324 { 00:09:41.324 "name": "BaseBdev1", 00:09:41.324 "uuid": "f83090c1-bf25-480d-943e-82f28b9d323b", 00:09:41.324 "is_configured": true, 00:09:41.324 "data_offset": 0, 00:09:41.324 "data_size": 65536 00:09:41.324 }, 00:09:41.324 { 00:09:41.324 "name": "BaseBdev2", 00:09:41.324 "uuid": "d616dd53-e900-46a7-886e-b01973f5405c", 00:09:41.324 "is_configured": true, 00:09:41.324 "data_offset": 0, 00:09:41.324 "data_size": 65536 00:09:41.324 }, 00:09:41.324 { 00:09:41.324 "name": "BaseBdev3", 00:09:41.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.324 "is_configured": false, 00:09:41.324 "data_offset": 0, 00:09:41.324 "data_size": 0 00:09:41.324 } 00:09:41.324 ] 00:09:41.324 }' 00:09:41.324 08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.324 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.584 08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:41.584 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.584 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.585 [2024-10-05 08:46:17.956537] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:41.585 [2024-10-05 08:46:17.956588] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:41.585 [2024-10-05 08:46:17.956608] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:41.585 [2024-10-05 08:46:17.956917] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:41.585 [2024-10-05 08:46:17.957232] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:09:41.585 [2024-10-05 08:46:17.957278] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:41.585 [2024-10-05 08:46:17.957580] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:41.585 BaseBdev3 00:09:41.585 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.585 08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:41.585 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:41.585 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:41.585 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:41.585 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:41.585 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:41.585 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:41.585 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.585 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.585 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.585 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:41.585 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.585 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.585 [ 00:09:41.585 { 00:09:41.585 "name": "BaseBdev3", 00:09:41.585 "aliases": [ 00:09:41.585 
"82a1935c-7575-4fd5-9dd0-e846fb0a2985" 00:09:41.585 ], 00:09:41.585 "product_name": "Malloc disk", 00:09:41.585 "block_size": 512, 00:09:41.585 "num_blocks": 65536, 00:09:41.585 "uuid": "82a1935c-7575-4fd5-9dd0-e846fb0a2985", 00:09:41.585 "assigned_rate_limits": { 00:09:41.585 "rw_ios_per_sec": 0, 00:09:41.585 "rw_mbytes_per_sec": 0, 00:09:41.585 "r_mbytes_per_sec": 0, 00:09:41.585 "w_mbytes_per_sec": 0 00:09:41.585 }, 00:09:41.585 "claimed": true, 00:09:41.585 "claim_type": "exclusive_write", 00:09:41.585 "zoned": false, 00:09:41.585 "supported_io_types": { 00:09:41.585 "read": true, 00:09:41.585 "write": true, 00:09:41.585 "unmap": true, 00:09:41.585 "flush": true, 00:09:41.585 "reset": true, 00:09:41.585 "nvme_admin": false, 00:09:41.585 "nvme_io": false, 00:09:41.585 "nvme_io_md": false, 00:09:41.585 "write_zeroes": true, 00:09:41.585 "zcopy": true, 00:09:41.585 "get_zone_info": false, 00:09:41.585 "zone_management": false, 00:09:41.585 "zone_append": false, 00:09:41.585 "compare": false, 00:09:41.585 "compare_and_write": false, 00:09:41.585 "abort": true, 00:09:41.585 "seek_hole": false, 00:09:41.585 "seek_data": false, 00:09:41.585 "copy": true, 00:09:41.585 "nvme_iov_md": false 00:09:41.585 }, 00:09:41.585 "memory_domains": [ 00:09:41.585 { 00:09:41.585 "dma_device_id": "system", 00:09:41.585 "dma_device_type": 1 00:09:41.585 }, 00:09:41.585 { 00:09:41.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.585 "dma_device_type": 2 00:09:41.585 } 00:09:41.585 ], 00:09:41.585 "driver_specific": {} 00:09:41.585 } 00:09:41.585 ] 00:09:41.585 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.585 08:46:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:41.585 08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:41.585 08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:41.585 
08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:41.585 08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.585 08:46:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:41.585 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:41.585 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:41.585 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.585 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.585 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.585 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.585 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.585 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.585 08:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.585 08:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.585 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.585 08:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.585 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.585 "name": "Existed_Raid", 00:09:41.585 "uuid": "7add30d5-4a7b-4151-85c4-f35c8e2048c5", 00:09:41.585 "strip_size_kb": 0, 00:09:41.585 "state": "online", 00:09:41.585 "raid_level": 
"raid1", 00:09:41.585 "superblock": false, 00:09:41.585 "num_base_bdevs": 3, 00:09:41.585 "num_base_bdevs_discovered": 3, 00:09:41.585 "num_base_bdevs_operational": 3, 00:09:41.585 "base_bdevs_list": [ 00:09:41.585 { 00:09:41.585 "name": "BaseBdev1", 00:09:41.585 "uuid": "f83090c1-bf25-480d-943e-82f28b9d323b", 00:09:41.585 "is_configured": true, 00:09:41.585 "data_offset": 0, 00:09:41.585 "data_size": 65536 00:09:41.585 }, 00:09:41.585 { 00:09:41.585 "name": "BaseBdev2", 00:09:41.585 "uuid": "d616dd53-e900-46a7-886e-b01973f5405c", 00:09:41.585 "is_configured": true, 00:09:41.585 "data_offset": 0, 00:09:41.585 "data_size": 65536 00:09:41.585 }, 00:09:41.585 { 00:09:41.585 "name": "BaseBdev3", 00:09:41.585 "uuid": "82a1935c-7575-4fd5-9dd0-e846fb0a2985", 00:09:41.585 "is_configured": true, 00:09:41.585 "data_offset": 0, 00:09:41.585 "data_size": 65536 00:09:41.585 } 00:09:41.585 ] 00:09:41.585 }' 00:09:41.585 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.585 08:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.155 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:42.155 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:42.156 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:42.156 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:42.156 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:42.156 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:42.156 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:42.156 08:46:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.156 08:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.156 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:42.156 [2024-10-05 08:46:18.416099] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:42.156 08:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.156 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:42.156 "name": "Existed_Raid", 00:09:42.156 "aliases": [ 00:09:42.156 "7add30d5-4a7b-4151-85c4-f35c8e2048c5" 00:09:42.156 ], 00:09:42.156 "product_name": "Raid Volume", 00:09:42.156 "block_size": 512, 00:09:42.156 "num_blocks": 65536, 00:09:42.156 "uuid": "7add30d5-4a7b-4151-85c4-f35c8e2048c5", 00:09:42.156 "assigned_rate_limits": { 00:09:42.156 "rw_ios_per_sec": 0, 00:09:42.156 "rw_mbytes_per_sec": 0, 00:09:42.156 "r_mbytes_per_sec": 0, 00:09:42.156 "w_mbytes_per_sec": 0 00:09:42.156 }, 00:09:42.156 "claimed": false, 00:09:42.156 "zoned": false, 00:09:42.156 "supported_io_types": { 00:09:42.156 "read": true, 00:09:42.156 "write": true, 00:09:42.156 "unmap": false, 00:09:42.156 "flush": false, 00:09:42.156 "reset": true, 00:09:42.156 "nvme_admin": false, 00:09:42.156 "nvme_io": false, 00:09:42.156 "nvme_io_md": false, 00:09:42.156 "write_zeroes": true, 00:09:42.156 "zcopy": false, 00:09:42.156 "get_zone_info": false, 00:09:42.156 "zone_management": false, 00:09:42.156 "zone_append": false, 00:09:42.156 "compare": false, 00:09:42.156 "compare_and_write": false, 00:09:42.156 "abort": false, 00:09:42.156 "seek_hole": false, 00:09:42.156 "seek_data": false, 00:09:42.156 "copy": false, 00:09:42.156 "nvme_iov_md": false 00:09:42.156 }, 00:09:42.156 "memory_domains": [ 00:09:42.156 { 00:09:42.156 "dma_device_id": "system", 00:09:42.156 "dma_device_type": 1 00:09:42.156 }, 00:09:42.156 { 
00:09:42.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.156 "dma_device_type": 2 00:09:42.156 }, 00:09:42.156 { 00:09:42.156 "dma_device_id": "system", 00:09:42.156 "dma_device_type": 1 00:09:42.156 }, 00:09:42.156 { 00:09:42.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.156 "dma_device_type": 2 00:09:42.156 }, 00:09:42.156 { 00:09:42.156 "dma_device_id": "system", 00:09:42.156 "dma_device_type": 1 00:09:42.156 }, 00:09:42.156 { 00:09:42.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.156 "dma_device_type": 2 00:09:42.156 } 00:09:42.156 ], 00:09:42.156 "driver_specific": { 00:09:42.156 "raid": { 00:09:42.156 "uuid": "7add30d5-4a7b-4151-85c4-f35c8e2048c5", 00:09:42.156 "strip_size_kb": 0, 00:09:42.156 "state": "online", 00:09:42.156 "raid_level": "raid1", 00:09:42.156 "superblock": false, 00:09:42.156 "num_base_bdevs": 3, 00:09:42.156 "num_base_bdevs_discovered": 3, 00:09:42.156 "num_base_bdevs_operational": 3, 00:09:42.156 "base_bdevs_list": [ 00:09:42.156 { 00:09:42.156 "name": "BaseBdev1", 00:09:42.156 "uuid": "f83090c1-bf25-480d-943e-82f28b9d323b", 00:09:42.156 "is_configured": true, 00:09:42.156 "data_offset": 0, 00:09:42.156 "data_size": 65536 00:09:42.156 }, 00:09:42.156 { 00:09:42.156 "name": "BaseBdev2", 00:09:42.156 "uuid": "d616dd53-e900-46a7-886e-b01973f5405c", 00:09:42.156 "is_configured": true, 00:09:42.156 "data_offset": 0, 00:09:42.156 "data_size": 65536 00:09:42.156 }, 00:09:42.156 { 00:09:42.156 "name": "BaseBdev3", 00:09:42.156 "uuid": "82a1935c-7575-4fd5-9dd0-e846fb0a2985", 00:09:42.156 "is_configured": true, 00:09:42.156 "data_offset": 0, 00:09:42.156 "data_size": 65536 00:09:42.156 } 00:09:42.156 ] 00:09:42.156 } 00:09:42.156 } 00:09:42.156 }' 00:09:42.156 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:42.156 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # 
base_bdev_names='BaseBdev1 00:09:42.156 BaseBdev2 00:09:42.156 BaseBdev3' 00:09:42.156 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.156 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:42.156 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.156 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.156 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:42.156 08:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.156 08:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.156 08:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.156 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.156 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.156 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.156 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:42.156 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.156 08:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.156 08:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.156 08:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:09:42.156 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.156 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.156 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.156 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:42.156 08:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.156 08:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.156 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.156 08:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.417 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.417 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.417 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:42.417 08:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.417 08:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.417 [2024-10-05 08:46:18.655366] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:42.417 08:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.417 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:42.417 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:42.417 08:46:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@198 -- # case $1 in 00:09:42.417 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:42.417 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:42.417 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:42.417 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.417 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:42.417 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:42.417 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:42.417 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:42.417 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.417 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.417 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.417 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.417 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.417 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.417 08:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.417 08:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.417 08:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.417 08:46:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.417 "name": "Existed_Raid", 00:09:42.417 "uuid": "7add30d5-4a7b-4151-85c4-f35c8e2048c5", 00:09:42.417 "strip_size_kb": 0, 00:09:42.417 "state": "online", 00:09:42.417 "raid_level": "raid1", 00:09:42.417 "superblock": false, 00:09:42.417 "num_base_bdevs": 3, 00:09:42.417 "num_base_bdevs_discovered": 2, 00:09:42.417 "num_base_bdevs_operational": 2, 00:09:42.417 "base_bdevs_list": [ 00:09:42.417 { 00:09:42.417 "name": null, 00:09:42.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.417 "is_configured": false, 00:09:42.417 "data_offset": 0, 00:09:42.417 "data_size": 65536 00:09:42.417 }, 00:09:42.417 { 00:09:42.417 "name": "BaseBdev2", 00:09:42.417 "uuid": "d616dd53-e900-46a7-886e-b01973f5405c", 00:09:42.417 "is_configured": true, 00:09:42.417 "data_offset": 0, 00:09:42.417 "data_size": 65536 00:09:42.417 }, 00:09:42.417 { 00:09:42.417 "name": "BaseBdev3", 00:09:42.417 "uuid": "82a1935c-7575-4fd5-9dd0-e846fb0a2985", 00:09:42.417 "is_configured": true, 00:09:42.417 "data_offset": 0, 00:09:42.417 "data_size": 65536 00:09:42.417 } 00:09:42.417 ] 00:09:42.417 }' 00:09:42.417 08:46:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.417 08:46:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.984 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:42.984 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:42.984 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:42.984 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.984 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.984 08:46:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:42.984 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.984 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:42.984 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:42.984 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:42.984 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.984 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.984 [2024-10-05 08:46:19.262606] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:42.984 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.984 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:42.984 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:42.984 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.985 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:42.985 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.985 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.985 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.985 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:42.985 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:42.985 08:46:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:42.985 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.985 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.985 [2024-10-05 08:46:19.425730] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:42.985 [2024-10-05 08:46:19.425844] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:43.244 [2024-10-05 08:46:19.526894] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:43.244 [2024-10-05 08:46:19.527061] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:43.244 [2024-10-05 08:46:19.527113] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 
-- # '[' -n '' ']' 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.244 BaseBdev2 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.244 [ 00:09:43.244 { 00:09:43.244 "name": "BaseBdev2", 00:09:43.244 "aliases": [ 00:09:43.244 "e2a4f13a-61b6-4feb-a1ac-0946bb89c9da" 00:09:43.244 ], 00:09:43.244 "product_name": "Malloc disk", 00:09:43.244 "block_size": 512, 00:09:43.244 "num_blocks": 65536, 00:09:43.244 "uuid": "e2a4f13a-61b6-4feb-a1ac-0946bb89c9da", 00:09:43.244 "assigned_rate_limits": { 00:09:43.244 "rw_ios_per_sec": 0, 00:09:43.244 "rw_mbytes_per_sec": 0, 00:09:43.244 "r_mbytes_per_sec": 0, 00:09:43.244 "w_mbytes_per_sec": 0 00:09:43.244 }, 00:09:43.244 "claimed": false, 00:09:43.244 "zoned": false, 00:09:43.244 "supported_io_types": { 00:09:43.244 "read": true, 00:09:43.244 "write": true, 00:09:43.244 "unmap": true, 00:09:43.244 "flush": true, 00:09:43.244 "reset": true, 00:09:43.244 "nvme_admin": false, 00:09:43.244 "nvme_io": false, 00:09:43.244 "nvme_io_md": false, 00:09:43.244 "write_zeroes": true, 00:09:43.244 "zcopy": true, 00:09:43.244 "get_zone_info": false, 00:09:43.244 "zone_management": false, 00:09:43.244 "zone_append": false, 00:09:43.244 "compare": false, 00:09:43.244 "compare_and_write": false, 00:09:43.244 "abort": true, 00:09:43.244 "seek_hole": false, 00:09:43.244 "seek_data": false, 00:09:43.244 "copy": true, 00:09:43.244 "nvme_iov_md": false 00:09:43.244 }, 00:09:43.244 "memory_domains": [ 00:09:43.244 { 00:09:43.244 "dma_device_id": "system", 00:09:43.244 "dma_device_type": 1 00:09:43.244 }, 00:09:43.244 { 00:09:43.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.244 "dma_device_type": 2 00:09:43.244 } 00:09:43.244 ], 00:09:43.244 "driver_specific": {} 00:09:43.244 } 00:09:43.244 ] 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.244 BaseBdev3 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.244 08:46:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:43.504 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.504 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.504 [ 00:09:43.504 { 00:09:43.504 "name": "BaseBdev3", 00:09:43.504 "aliases": [ 00:09:43.504 "ee216a36-1588-44df-a3ec-d7cb2bc438c1" 00:09:43.504 ], 00:09:43.504 "product_name": "Malloc disk", 00:09:43.504 "block_size": 512, 00:09:43.504 "num_blocks": 65536, 00:09:43.504 "uuid": "ee216a36-1588-44df-a3ec-d7cb2bc438c1", 00:09:43.504 "assigned_rate_limits": { 00:09:43.504 "rw_ios_per_sec": 0, 00:09:43.504 "rw_mbytes_per_sec": 0, 00:09:43.504 "r_mbytes_per_sec": 0, 00:09:43.504 "w_mbytes_per_sec": 0 00:09:43.504 }, 00:09:43.504 "claimed": false, 00:09:43.504 "zoned": false, 00:09:43.504 "supported_io_types": { 00:09:43.504 "read": true, 00:09:43.504 "write": true, 00:09:43.504 "unmap": true, 00:09:43.504 "flush": true, 00:09:43.504 "reset": true, 00:09:43.504 "nvme_admin": false, 00:09:43.504 "nvme_io": false, 00:09:43.504 "nvme_io_md": false, 00:09:43.504 "write_zeroes": true, 00:09:43.504 "zcopy": true, 00:09:43.504 "get_zone_info": false, 00:09:43.504 "zone_management": false, 00:09:43.504 "zone_append": false, 00:09:43.504 "compare": false, 00:09:43.504 "compare_and_write": false, 00:09:43.504 "abort": true, 00:09:43.504 "seek_hole": false, 00:09:43.504 "seek_data": false, 00:09:43.504 "copy": true, 00:09:43.504 "nvme_iov_md": false 00:09:43.504 }, 00:09:43.504 "memory_domains": [ 00:09:43.504 { 00:09:43.504 "dma_device_id": "system", 00:09:43.504 "dma_device_type": 1 00:09:43.504 }, 00:09:43.504 { 00:09:43.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.504 "dma_device_type": 2 00:09:43.504 } 00:09:43.504 ], 00:09:43.504 "driver_specific": {} 00:09:43.504 } 00:09:43.504 ] 00:09:43.504 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:09:43.504 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:43.504 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:43.504 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:43.504 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:43.504 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.504 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.504 [2024-10-05 08:46:19.744604] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:43.504 [2024-10-05 08:46:19.744750] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:43.504 [2024-10-05 08:46:19.744793] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:43.504 [2024-10-05 08:46:19.746794] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:43.504 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.504 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:43.504 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.504 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.504 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.504 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.504 08:46:19 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.505 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.505 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.505 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.505 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.505 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.505 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.505 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.505 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.505 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.505 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.505 "name": "Existed_Raid", 00:09:43.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.505 "strip_size_kb": 0, 00:09:43.505 "state": "configuring", 00:09:43.505 "raid_level": "raid1", 00:09:43.505 "superblock": false, 00:09:43.505 "num_base_bdevs": 3, 00:09:43.505 "num_base_bdevs_discovered": 2, 00:09:43.505 "num_base_bdevs_operational": 3, 00:09:43.505 "base_bdevs_list": [ 00:09:43.505 { 00:09:43.505 "name": "BaseBdev1", 00:09:43.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.505 "is_configured": false, 00:09:43.505 "data_offset": 0, 00:09:43.505 "data_size": 0 00:09:43.505 }, 00:09:43.505 { 00:09:43.505 "name": "BaseBdev2", 00:09:43.505 "uuid": "e2a4f13a-61b6-4feb-a1ac-0946bb89c9da", 00:09:43.505 "is_configured": true, 00:09:43.505 "data_offset": 0, 00:09:43.505 "data_size": 
65536 00:09:43.505 }, 00:09:43.505 { 00:09:43.505 "name": "BaseBdev3", 00:09:43.505 "uuid": "ee216a36-1588-44df-a3ec-d7cb2bc438c1", 00:09:43.505 "is_configured": true, 00:09:43.505 "data_offset": 0, 00:09:43.505 "data_size": 65536 00:09:43.505 } 00:09:43.505 ] 00:09:43.505 }' 00:09:43.505 08:46:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.505 08:46:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.766 08:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:43.766 08:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.766 08:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.766 [2024-10-05 08:46:20.171895] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:43.766 08:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.766 08:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:43.766 08:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.766 08:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.766 08:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.766 08:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.766 08:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.766 08:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.766 08:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.766 08:46:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.766 08:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.766 08:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.766 08:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.766 08:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.766 08:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.766 08:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.766 08:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.766 "name": "Existed_Raid", 00:09:43.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.766 "strip_size_kb": 0, 00:09:43.766 "state": "configuring", 00:09:43.766 "raid_level": "raid1", 00:09:43.766 "superblock": false, 00:09:43.766 "num_base_bdevs": 3, 00:09:43.766 "num_base_bdevs_discovered": 1, 00:09:43.766 "num_base_bdevs_operational": 3, 00:09:43.766 "base_bdevs_list": [ 00:09:43.766 { 00:09:43.766 "name": "BaseBdev1", 00:09:43.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.766 "is_configured": false, 00:09:43.766 "data_offset": 0, 00:09:43.766 "data_size": 0 00:09:43.766 }, 00:09:43.766 { 00:09:43.766 "name": null, 00:09:43.766 "uuid": "e2a4f13a-61b6-4feb-a1ac-0946bb89c9da", 00:09:43.766 "is_configured": false, 00:09:43.766 "data_offset": 0, 00:09:43.766 "data_size": 65536 00:09:43.766 }, 00:09:43.766 { 00:09:43.766 "name": "BaseBdev3", 00:09:43.766 "uuid": "ee216a36-1588-44df-a3ec-d7cb2bc438c1", 00:09:43.766 "is_configured": true, 00:09:43.766 "data_offset": 0, 00:09:43.766 "data_size": 65536 00:09:43.766 } 00:09:43.766 ] 00:09:43.766 }' 00:09:43.766 08:46:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.766 08:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.358 08:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.358 08:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:44.358 08:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.358 08:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.358 08:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.358 08:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:44.358 08:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:44.358 08:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.358 08:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.358 [2024-10-05 08:46:20.645853] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:44.358 BaseBdev1 00:09:44.358 08:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.359 08:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:44.359 08:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:44.359 08:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:44.359 08:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:44.359 08:46:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:44.359 08:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:44.359 08:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:44.359 08:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.359 08:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.359 08:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.359 08:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:44.359 08:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.359 08:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.359 [ 00:09:44.359 { 00:09:44.359 "name": "BaseBdev1", 00:09:44.359 "aliases": [ 00:09:44.359 "96d10d6b-a5d8-43da-8228-edce51041a7e" 00:09:44.359 ], 00:09:44.359 "product_name": "Malloc disk", 00:09:44.359 "block_size": 512, 00:09:44.359 "num_blocks": 65536, 00:09:44.359 "uuid": "96d10d6b-a5d8-43da-8228-edce51041a7e", 00:09:44.359 "assigned_rate_limits": { 00:09:44.359 "rw_ios_per_sec": 0, 00:09:44.359 "rw_mbytes_per_sec": 0, 00:09:44.359 "r_mbytes_per_sec": 0, 00:09:44.359 "w_mbytes_per_sec": 0 00:09:44.359 }, 00:09:44.359 "claimed": true, 00:09:44.359 "claim_type": "exclusive_write", 00:09:44.359 "zoned": false, 00:09:44.359 "supported_io_types": { 00:09:44.359 "read": true, 00:09:44.359 "write": true, 00:09:44.359 "unmap": true, 00:09:44.359 "flush": true, 00:09:44.359 "reset": true, 00:09:44.359 "nvme_admin": false, 00:09:44.359 "nvme_io": false, 00:09:44.359 "nvme_io_md": false, 00:09:44.359 "write_zeroes": true, 00:09:44.359 "zcopy": true, 00:09:44.359 "get_zone_info": false, 00:09:44.359 "zone_management": false, 
00:09:44.359 "zone_append": false, 00:09:44.359 "compare": false, 00:09:44.359 "compare_and_write": false, 00:09:44.359 "abort": true, 00:09:44.359 "seek_hole": false, 00:09:44.359 "seek_data": false, 00:09:44.359 "copy": true, 00:09:44.359 "nvme_iov_md": false 00:09:44.359 }, 00:09:44.359 "memory_domains": [ 00:09:44.359 { 00:09:44.359 "dma_device_id": "system", 00:09:44.359 "dma_device_type": 1 00:09:44.359 }, 00:09:44.359 { 00:09:44.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.359 "dma_device_type": 2 00:09:44.359 } 00:09:44.359 ], 00:09:44.359 "driver_specific": {} 00:09:44.359 } 00:09:44.359 ] 00:09:44.359 08:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.359 08:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:44.359 08:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:44.359 08:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.359 08:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.359 08:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:44.359 08:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:44.359 08:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.359 08:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.359 08:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.359 08:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.359 08:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.359 
08:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.359 08:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.359 08:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.359 08:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.359 08:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.359 08:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.359 "name": "Existed_Raid", 00:09:44.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.359 "strip_size_kb": 0, 00:09:44.359 "state": "configuring", 00:09:44.359 "raid_level": "raid1", 00:09:44.359 "superblock": false, 00:09:44.359 "num_base_bdevs": 3, 00:09:44.359 "num_base_bdevs_discovered": 2, 00:09:44.359 "num_base_bdevs_operational": 3, 00:09:44.359 "base_bdevs_list": [ 00:09:44.359 { 00:09:44.359 "name": "BaseBdev1", 00:09:44.359 "uuid": "96d10d6b-a5d8-43da-8228-edce51041a7e", 00:09:44.359 "is_configured": true, 00:09:44.359 "data_offset": 0, 00:09:44.359 "data_size": 65536 00:09:44.359 }, 00:09:44.359 { 00:09:44.359 "name": null, 00:09:44.359 "uuid": "e2a4f13a-61b6-4feb-a1ac-0946bb89c9da", 00:09:44.359 "is_configured": false, 00:09:44.359 "data_offset": 0, 00:09:44.359 "data_size": 65536 00:09:44.359 }, 00:09:44.359 { 00:09:44.359 "name": "BaseBdev3", 00:09:44.359 "uuid": "ee216a36-1588-44df-a3ec-d7cb2bc438c1", 00:09:44.359 "is_configured": true, 00:09:44.359 "data_offset": 0, 00:09:44.359 "data_size": 65536 00:09:44.359 } 00:09:44.359 ] 00:09:44.359 }' 00:09:44.359 08:46:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.359 08:46:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.929 08:46:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:44.929 08:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.929 08:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.929 08:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.929 08:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.929 08:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:44.929 08:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:44.929 08:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.929 08:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.929 [2024-10-05 08:46:21.153027] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:44.929 08:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.929 08:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:44.929 08:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.929 08:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.929 08:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:44.929 08:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:44.929 08:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.929 08:46:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.929 08:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.929 08:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.929 08:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.929 08:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.929 08:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.929 08:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.929 08:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.929 08:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.929 08:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.929 "name": "Existed_Raid", 00:09:44.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.929 "strip_size_kb": 0, 00:09:44.929 "state": "configuring", 00:09:44.929 "raid_level": "raid1", 00:09:44.929 "superblock": false, 00:09:44.929 "num_base_bdevs": 3, 00:09:44.929 "num_base_bdevs_discovered": 1, 00:09:44.929 "num_base_bdevs_operational": 3, 00:09:44.929 "base_bdevs_list": [ 00:09:44.929 { 00:09:44.929 "name": "BaseBdev1", 00:09:44.929 "uuid": "96d10d6b-a5d8-43da-8228-edce51041a7e", 00:09:44.929 "is_configured": true, 00:09:44.929 "data_offset": 0, 00:09:44.929 "data_size": 65536 00:09:44.929 }, 00:09:44.929 { 00:09:44.929 "name": null, 00:09:44.929 "uuid": "e2a4f13a-61b6-4feb-a1ac-0946bb89c9da", 00:09:44.929 "is_configured": false, 00:09:44.929 "data_offset": 0, 00:09:44.929 "data_size": 65536 00:09:44.929 }, 00:09:44.929 { 00:09:44.929 "name": null, 00:09:44.929 "uuid": "ee216a36-1588-44df-a3ec-d7cb2bc438c1", 
00:09:44.929 "is_configured": false, 00:09:44.929 "data_offset": 0, 00:09:44.929 "data_size": 65536 00:09:44.929 } 00:09:44.929 ] 00:09:44.929 }' 00:09:44.929 08:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.929 08:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.189 08:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:45.190 08:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.190 08:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.190 08:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.190 08:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.190 08:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:45.190 08:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:45.190 08:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.190 08:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.190 [2024-10-05 08:46:21.600264] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:45.190 08:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.190 08:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:45.190 08:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.190 08:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:09:45.190 08:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:45.190 08:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:45.190 08:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.190 08:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.190 08:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.190 08:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.190 08:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.190 08:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.190 08:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.190 08:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.190 08:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.190 08:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.190 08:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.190 "name": "Existed_Raid", 00:09:45.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.190 "strip_size_kb": 0, 00:09:45.190 "state": "configuring", 00:09:45.190 "raid_level": "raid1", 00:09:45.190 "superblock": false, 00:09:45.190 "num_base_bdevs": 3, 00:09:45.190 "num_base_bdevs_discovered": 2, 00:09:45.190 "num_base_bdevs_operational": 3, 00:09:45.190 "base_bdevs_list": [ 00:09:45.190 { 00:09:45.190 "name": "BaseBdev1", 00:09:45.190 "uuid": "96d10d6b-a5d8-43da-8228-edce51041a7e", 00:09:45.190 
"is_configured": true, 00:09:45.190 "data_offset": 0, 00:09:45.190 "data_size": 65536 00:09:45.190 }, 00:09:45.190 { 00:09:45.190 "name": null, 00:09:45.190 "uuid": "e2a4f13a-61b6-4feb-a1ac-0946bb89c9da", 00:09:45.190 "is_configured": false, 00:09:45.190 "data_offset": 0, 00:09:45.190 "data_size": 65536 00:09:45.190 }, 00:09:45.190 { 00:09:45.190 "name": "BaseBdev3", 00:09:45.190 "uuid": "ee216a36-1588-44df-a3ec-d7cb2bc438c1", 00:09:45.190 "is_configured": true, 00:09:45.190 "data_offset": 0, 00:09:45.190 "data_size": 65536 00:09:45.190 } 00:09:45.190 ] 00:09:45.190 }' 00:09:45.190 08:46:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.190 08:46:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.761 08:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:45.761 08:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.761 08:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.761 08:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.761 08:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.761 08:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:45.761 08:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:45.761 08:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.761 08:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.761 [2024-10-05 08:46:22.063538] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:45.761 08:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:45.761 08:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:45.761 08:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.761 08:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.761 08:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:45.761 08:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:45.761 08:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.761 08:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.761 08:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.761 08:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.761 08:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.761 08:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.761 08:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.761 08:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.761 08:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.761 08:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.761 08:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.761 "name": "Existed_Raid", 00:09:45.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.761 "strip_size_kb": 0, 00:09:45.761 "state": 
"configuring", 00:09:45.761 "raid_level": "raid1", 00:09:45.761 "superblock": false, 00:09:45.761 "num_base_bdevs": 3, 00:09:45.761 "num_base_bdevs_discovered": 1, 00:09:45.761 "num_base_bdevs_operational": 3, 00:09:45.761 "base_bdevs_list": [ 00:09:45.761 { 00:09:45.761 "name": null, 00:09:45.761 "uuid": "96d10d6b-a5d8-43da-8228-edce51041a7e", 00:09:45.761 "is_configured": false, 00:09:45.761 "data_offset": 0, 00:09:45.761 "data_size": 65536 00:09:45.761 }, 00:09:45.761 { 00:09:45.761 "name": null, 00:09:45.761 "uuid": "e2a4f13a-61b6-4feb-a1ac-0946bb89c9da", 00:09:45.761 "is_configured": false, 00:09:45.761 "data_offset": 0, 00:09:45.761 "data_size": 65536 00:09:45.761 }, 00:09:45.761 { 00:09:45.761 "name": "BaseBdev3", 00:09:45.761 "uuid": "ee216a36-1588-44df-a3ec-d7cb2bc438c1", 00:09:45.761 "is_configured": true, 00:09:45.761 "data_offset": 0, 00:09:45.761 "data_size": 65536 00:09:45.761 } 00:09:45.761 ] 00:09:45.761 }' 00:09:45.761 08:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.761 08:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.331 08:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:46.331 08:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.331 08:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.331 08:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.331 08:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.331 08:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:46.331 08:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:46.331 08:46:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.331 08:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.331 [2024-10-05 08:46:22.589253] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:46.331 08:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.331 08:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:46.331 08:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.331 08:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.331 08:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:46.331 08:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:46.331 08:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.331 08:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.331 08:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.331 08:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.331 08:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.331 08:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.331 08:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.331 08:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.331 08:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:09:46.331 08:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.331 08:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.331 "name": "Existed_Raid", 00:09:46.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.331 "strip_size_kb": 0, 00:09:46.331 "state": "configuring", 00:09:46.331 "raid_level": "raid1", 00:09:46.331 "superblock": false, 00:09:46.331 "num_base_bdevs": 3, 00:09:46.331 "num_base_bdevs_discovered": 2, 00:09:46.331 "num_base_bdevs_operational": 3, 00:09:46.331 "base_bdevs_list": [ 00:09:46.331 { 00:09:46.331 "name": null, 00:09:46.331 "uuid": "96d10d6b-a5d8-43da-8228-edce51041a7e", 00:09:46.331 "is_configured": false, 00:09:46.331 "data_offset": 0, 00:09:46.331 "data_size": 65536 00:09:46.331 }, 00:09:46.331 { 00:09:46.331 "name": "BaseBdev2", 00:09:46.331 "uuid": "e2a4f13a-61b6-4feb-a1ac-0946bb89c9da", 00:09:46.331 "is_configured": true, 00:09:46.331 "data_offset": 0, 00:09:46.331 "data_size": 65536 00:09:46.331 }, 00:09:46.331 { 00:09:46.331 "name": "BaseBdev3", 00:09:46.331 "uuid": "ee216a36-1588-44df-a3ec-d7cb2bc438c1", 00:09:46.331 "is_configured": true, 00:09:46.331 "data_offset": 0, 00:09:46.331 "data_size": 65536 00:09:46.331 } 00:09:46.331 ] 00:09:46.331 }' 00:09:46.331 08:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.331 08:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.591 08:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:46.591 08:46:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.591 08:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.591 08:46:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:46.591 08:46:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.591 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:46.591 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:46.591 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.591 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.591 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.591 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.851 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 96d10d6b-a5d8-43da-8228-edce51041a7e 00:09:46.851 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.851 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.851 [2024-10-05 08:46:23.109616] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:46.851 [2024-10-05 08:46:23.109669] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:46.851 [2024-10-05 08:46:23.109676] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:46.851 [2024-10-05 08:46:23.109938] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:46.851 [2024-10-05 08:46:23.110156] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:46.851 [2024-10-05 08:46:23.110171] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000008200 00:09:46.851 [2024-10-05 08:46:23.110430] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:46.851 NewBaseBdev 00:09:46.851 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.851 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:46.851 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:46.851 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:46.851 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:46.851 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:46.851 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:46.851 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:46.851 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.851 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.851 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.851 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:46.851 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.851 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.851 [ 00:09:46.851 { 00:09:46.851 "name": "NewBaseBdev", 00:09:46.851 "aliases": [ 00:09:46.851 "96d10d6b-a5d8-43da-8228-edce51041a7e" 00:09:46.851 ], 00:09:46.851 "product_name": "Malloc disk", 00:09:46.851 "block_size": 512, 00:09:46.851 "num_blocks": 65536, 
00:09:46.851 "uuid": "96d10d6b-a5d8-43da-8228-edce51041a7e", 00:09:46.851 "assigned_rate_limits": { 00:09:46.851 "rw_ios_per_sec": 0, 00:09:46.851 "rw_mbytes_per_sec": 0, 00:09:46.851 "r_mbytes_per_sec": 0, 00:09:46.851 "w_mbytes_per_sec": 0 00:09:46.851 }, 00:09:46.851 "claimed": true, 00:09:46.851 "claim_type": "exclusive_write", 00:09:46.851 "zoned": false, 00:09:46.851 "supported_io_types": { 00:09:46.851 "read": true, 00:09:46.851 "write": true, 00:09:46.851 "unmap": true, 00:09:46.851 "flush": true, 00:09:46.851 "reset": true, 00:09:46.851 "nvme_admin": false, 00:09:46.851 "nvme_io": false, 00:09:46.851 "nvme_io_md": false, 00:09:46.851 "write_zeroes": true, 00:09:46.851 "zcopy": true, 00:09:46.851 "get_zone_info": false, 00:09:46.851 "zone_management": false, 00:09:46.851 "zone_append": false, 00:09:46.851 "compare": false, 00:09:46.851 "compare_and_write": false, 00:09:46.851 "abort": true, 00:09:46.851 "seek_hole": false, 00:09:46.851 "seek_data": false, 00:09:46.851 "copy": true, 00:09:46.851 "nvme_iov_md": false 00:09:46.851 }, 00:09:46.851 "memory_domains": [ 00:09:46.851 { 00:09:46.851 "dma_device_id": "system", 00:09:46.851 "dma_device_type": 1 00:09:46.851 }, 00:09:46.851 { 00:09:46.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.851 "dma_device_type": 2 00:09:46.851 } 00:09:46.851 ], 00:09:46.851 "driver_specific": {} 00:09:46.851 } 00:09:46.851 ] 00:09:46.851 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.851 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:46.851 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:46.851 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.851 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:46.851 
08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:46.851 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:46.851 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.851 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.851 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.851 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.851 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.851 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.851 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.851 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.851 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.851 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.851 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.851 "name": "Existed_Raid", 00:09:46.851 "uuid": "6e308624-020e-4fbc-b5a6-f4ea69b2fbfb", 00:09:46.851 "strip_size_kb": 0, 00:09:46.851 "state": "online", 00:09:46.851 "raid_level": "raid1", 00:09:46.851 "superblock": false, 00:09:46.851 "num_base_bdevs": 3, 00:09:46.851 "num_base_bdevs_discovered": 3, 00:09:46.851 "num_base_bdevs_operational": 3, 00:09:46.851 "base_bdevs_list": [ 00:09:46.851 { 00:09:46.851 "name": "NewBaseBdev", 00:09:46.851 "uuid": "96d10d6b-a5d8-43da-8228-edce51041a7e", 00:09:46.851 "is_configured": true, 00:09:46.851 
"data_offset": 0, 00:09:46.851 "data_size": 65536 00:09:46.851 }, 00:09:46.851 { 00:09:46.851 "name": "BaseBdev2", 00:09:46.851 "uuid": "e2a4f13a-61b6-4feb-a1ac-0946bb89c9da", 00:09:46.851 "is_configured": true, 00:09:46.851 "data_offset": 0, 00:09:46.851 "data_size": 65536 00:09:46.851 }, 00:09:46.851 { 00:09:46.851 "name": "BaseBdev3", 00:09:46.851 "uuid": "ee216a36-1588-44df-a3ec-d7cb2bc438c1", 00:09:46.851 "is_configured": true, 00:09:46.851 "data_offset": 0, 00:09:46.851 "data_size": 65536 00:09:46.851 } 00:09:46.851 ] 00:09:46.851 }' 00:09:46.851 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.851 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.112 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:47.112 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:47.112 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:47.112 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:47.112 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:47.112 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:47.112 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:47.112 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.112 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:47.112 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.112 [2024-10-05 08:46:23.557242] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:09:47.112 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.372 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:47.372 "name": "Existed_Raid", 00:09:47.372 "aliases": [ 00:09:47.372 "6e308624-020e-4fbc-b5a6-f4ea69b2fbfb" 00:09:47.372 ], 00:09:47.372 "product_name": "Raid Volume", 00:09:47.372 "block_size": 512, 00:09:47.372 "num_blocks": 65536, 00:09:47.372 "uuid": "6e308624-020e-4fbc-b5a6-f4ea69b2fbfb", 00:09:47.372 "assigned_rate_limits": { 00:09:47.372 "rw_ios_per_sec": 0, 00:09:47.372 "rw_mbytes_per_sec": 0, 00:09:47.372 "r_mbytes_per_sec": 0, 00:09:47.372 "w_mbytes_per_sec": 0 00:09:47.372 }, 00:09:47.372 "claimed": false, 00:09:47.372 "zoned": false, 00:09:47.372 "supported_io_types": { 00:09:47.372 "read": true, 00:09:47.372 "write": true, 00:09:47.372 "unmap": false, 00:09:47.372 "flush": false, 00:09:47.372 "reset": true, 00:09:47.372 "nvme_admin": false, 00:09:47.372 "nvme_io": false, 00:09:47.372 "nvme_io_md": false, 00:09:47.372 "write_zeroes": true, 00:09:47.372 "zcopy": false, 00:09:47.372 "get_zone_info": false, 00:09:47.372 "zone_management": false, 00:09:47.372 "zone_append": false, 00:09:47.372 "compare": false, 00:09:47.372 "compare_and_write": false, 00:09:47.372 "abort": false, 00:09:47.372 "seek_hole": false, 00:09:47.372 "seek_data": false, 00:09:47.372 "copy": false, 00:09:47.372 "nvme_iov_md": false 00:09:47.372 }, 00:09:47.372 "memory_domains": [ 00:09:47.372 { 00:09:47.372 "dma_device_id": "system", 00:09:47.372 "dma_device_type": 1 00:09:47.372 }, 00:09:47.372 { 00:09:47.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.372 "dma_device_type": 2 00:09:47.372 }, 00:09:47.372 { 00:09:47.372 "dma_device_id": "system", 00:09:47.372 "dma_device_type": 1 00:09:47.372 }, 00:09:47.372 { 00:09:47.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.372 "dma_device_type": 2 00:09:47.372 }, 00:09:47.372 { 00:09:47.372 "dma_device_id": 
"system", 00:09:47.372 "dma_device_type": 1 00:09:47.372 }, 00:09:47.372 { 00:09:47.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.372 "dma_device_type": 2 00:09:47.372 } 00:09:47.372 ], 00:09:47.372 "driver_specific": { 00:09:47.372 "raid": { 00:09:47.372 "uuid": "6e308624-020e-4fbc-b5a6-f4ea69b2fbfb", 00:09:47.372 "strip_size_kb": 0, 00:09:47.372 "state": "online", 00:09:47.372 "raid_level": "raid1", 00:09:47.372 "superblock": false, 00:09:47.372 "num_base_bdevs": 3, 00:09:47.372 "num_base_bdevs_discovered": 3, 00:09:47.372 "num_base_bdevs_operational": 3, 00:09:47.372 "base_bdevs_list": [ 00:09:47.372 { 00:09:47.372 "name": "NewBaseBdev", 00:09:47.372 "uuid": "96d10d6b-a5d8-43da-8228-edce51041a7e", 00:09:47.372 "is_configured": true, 00:09:47.372 "data_offset": 0, 00:09:47.372 "data_size": 65536 00:09:47.372 }, 00:09:47.372 { 00:09:47.372 "name": "BaseBdev2", 00:09:47.372 "uuid": "e2a4f13a-61b6-4feb-a1ac-0946bb89c9da", 00:09:47.372 "is_configured": true, 00:09:47.372 "data_offset": 0, 00:09:47.372 "data_size": 65536 00:09:47.372 }, 00:09:47.372 { 00:09:47.372 "name": "BaseBdev3", 00:09:47.372 "uuid": "ee216a36-1588-44df-a3ec-d7cb2bc438c1", 00:09:47.372 "is_configured": true, 00:09:47.372 "data_offset": 0, 00:09:47.372 "data_size": 65536 00:09:47.372 } 00:09:47.372 ] 00:09:47.372 } 00:09:47.372 } 00:09:47.372 }' 00:09:47.372 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:47.372 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:47.372 BaseBdev2 00:09:47.372 BaseBdev3' 00:09:47.372 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.372 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:47.372 08:46:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.372 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:47.372 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.372 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.372 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.372 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.372 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.372 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.372 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.372 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:47.372 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.373 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.373 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.373 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.373 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.373 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.373 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.373 
08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.373 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:47.373 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.373 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.373 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.373 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.373 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.373 08:46:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:47.373 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.373 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.373 [2024-10-05 08:46:23.804450] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:47.373 [2024-10-05 08:46:23.804534] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:47.373 [2024-10-05 08:46:23.804631] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:47.373 [2024-10-05 08:46:23.804973] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:47.373 [2024-10-05 08:46:23.805028] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:47.373 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.373 08:46:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 66470 00:09:47.373 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 66470 ']' 00:09:47.373 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 66470 00:09:47.373 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:47.373 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:47.373 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66470 00:09:47.632 killing process with pid 66470 00:09:47.632 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:47.632 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:47.632 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66470' 00:09:47.632 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 66470 00:09:47.632 [2024-10-05 08:46:23.852375] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:47.632 08:46:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 66470 00:09:47.892 [2024-10-05 08:46:24.175486] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:49.274 08:46:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:49.274 00:09:49.274 real 0m10.401s 00:09:49.274 user 0m16.034s 00:09:49.274 sys 0m1.942s 00:09:49.274 ************************************ 00:09:49.274 END TEST raid_state_function_test 00:09:49.274 ************************************ 00:09:49.274 08:46:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:49.274 08:46:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:49.274 08:46:25 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:49.274 08:46:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:49.274 08:46:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:49.274 08:46:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:49.274 ************************************ 00:09:49.274 START TEST raid_state_function_test_sb 00:09:49.274 ************************************ 00:09:49.274 08:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:09:49.274 08:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:49.274 08:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:49.274 08:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:49.274 08:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:49.274 08:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:49.274 08:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:49.274 08:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:49.274 08:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:49.274 08:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:49.274 08:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:49.274 08:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:49.274 08:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:49.274 
08:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:49.274 08:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:49.274 08:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:49.274 08:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:49.274 08:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:49.274 08:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:49.274 08:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:49.274 08:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:49.274 08:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:49.274 08:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:49.274 08:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:49.274 08:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:49.274 08:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:49.274 Process raid pid: 67031 00:09:49.274 08:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=67031 00:09:49.274 08:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:49.274 08:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67031' 00:09:49.274 08:46:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 
67031 00:09:49.274 08:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 67031 ']' 00:09:49.274 08:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.274 08:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:49.274 08:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.274 08:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:49.274 08:46:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.274 [2024-10-05 08:46:25.685297] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:09:49.274 [2024-10-05 08:46:25.685497] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.534 [2024-10-05 08:46:25.856773] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.794 [2024-10-05 08:46:26.110300] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.063 [2024-10-05 08:46:26.331782] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:50.063 [2024-10-05 08:46:26.331848] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:50.327 08:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:50.327 08:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:50.327 08:46:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:50.327 08:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.327 08:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.327 [2024-10-05 08:46:26.551414] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:50.327 [2024-10-05 08:46:26.551543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:50.327 [2024-10-05 08:46:26.551559] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:50.327 [2024-10-05 08:46:26.551572] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:50.327 [2024-10-05 08:46:26.551578] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:50.327 [2024-10-05 08:46:26.551588] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:50.327 08:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.327 08:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:50.327 08:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.327 08:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.327 08:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:50.327 08:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:50.327 08:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:09:50.327 08:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.327 08:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.327 08:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.327 08:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.327 08:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.327 08:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.327 08:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.327 08:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.327 08:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.327 08:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.327 "name": "Existed_Raid", 00:09:50.327 "uuid": "6e3cee81-9763-43a5-acb9-486e33578897", 00:09:50.327 "strip_size_kb": 0, 00:09:50.327 "state": "configuring", 00:09:50.327 "raid_level": "raid1", 00:09:50.327 "superblock": true, 00:09:50.327 "num_base_bdevs": 3, 00:09:50.327 "num_base_bdevs_discovered": 0, 00:09:50.328 "num_base_bdevs_operational": 3, 00:09:50.328 "base_bdevs_list": [ 00:09:50.328 { 00:09:50.328 "name": "BaseBdev1", 00:09:50.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.328 "is_configured": false, 00:09:50.328 "data_offset": 0, 00:09:50.328 "data_size": 0 00:09:50.328 }, 00:09:50.328 { 00:09:50.328 "name": "BaseBdev2", 00:09:50.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.328 "is_configured": false, 00:09:50.328 "data_offset": 0, 00:09:50.328 "data_size": 0 
00:09:50.328 }, 00:09:50.328 { 00:09:50.328 "name": "BaseBdev3", 00:09:50.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.328 "is_configured": false, 00:09:50.328 "data_offset": 0, 00:09:50.328 "data_size": 0 00:09:50.328 } 00:09:50.328 ] 00:09:50.328 }' 00:09:50.328 08:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.328 08:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.587 08:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:50.587 08:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.587 08:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.587 [2024-10-05 08:46:26.990545] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:50.587 [2024-10-05 08:46:26.990642] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:50.587 08:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.587 08:46:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:50.588 08:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.588 08:46:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.588 [2024-10-05 08:46:26.998555] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:50.588 [2024-10-05 08:46:26.998642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:50.588 [2024-10-05 08:46:26.998679] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:09:50.588 [2024-10-05 08:46:26.998701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:50.588 [2024-10-05 08:46:26.998718] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:50.588 [2024-10-05 08:46:26.998739] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:50.588 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.588 08:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:50.588 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.588 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.848 [2024-10-05 08:46:27.059779] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:50.848 BaseBdev1 00:09:50.848 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.848 08:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:50.848 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:50.848 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:50.848 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:50.848 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:50.848 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:50.848 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:50.848 08:46:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.848 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.848 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.848 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:50.848 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.848 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.848 [ 00:09:50.848 { 00:09:50.848 "name": "BaseBdev1", 00:09:50.848 "aliases": [ 00:09:50.848 "418ca457-b749-4751-b694-48a9cb62eae6" 00:09:50.848 ], 00:09:50.848 "product_name": "Malloc disk", 00:09:50.848 "block_size": 512, 00:09:50.848 "num_blocks": 65536, 00:09:50.848 "uuid": "418ca457-b749-4751-b694-48a9cb62eae6", 00:09:50.848 "assigned_rate_limits": { 00:09:50.848 "rw_ios_per_sec": 0, 00:09:50.848 "rw_mbytes_per_sec": 0, 00:09:50.848 "r_mbytes_per_sec": 0, 00:09:50.848 "w_mbytes_per_sec": 0 00:09:50.848 }, 00:09:50.848 "claimed": true, 00:09:50.848 "claim_type": "exclusive_write", 00:09:50.848 "zoned": false, 00:09:50.848 "supported_io_types": { 00:09:50.848 "read": true, 00:09:50.848 "write": true, 00:09:50.848 "unmap": true, 00:09:50.848 "flush": true, 00:09:50.848 "reset": true, 00:09:50.848 "nvme_admin": false, 00:09:50.848 "nvme_io": false, 00:09:50.848 "nvme_io_md": false, 00:09:50.848 "write_zeroes": true, 00:09:50.848 "zcopy": true, 00:09:50.848 "get_zone_info": false, 00:09:50.848 "zone_management": false, 00:09:50.848 "zone_append": false, 00:09:50.848 "compare": false, 00:09:50.848 "compare_and_write": false, 00:09:50.848 "abort": true, 00:09:50.848 "seek_hole": false, 00:09:50.848 "seek_data": false, 00:09:50.848 "copy": true, 00:09:50.848 "nvme_iov_md": false 00:09:50.848 }, 
00:09:50.848 "memory_domains": [ 00:09:50.848 { 00:09:50.848 "dma_device_id": "system", 00:09:50.848 "dma_device_type": 1 00:09:50.848 }, 00:09:50.848 { 00:09:50.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.848 "dma_device_type": 2 00:09:50.848 } 00:09:50.848 ], 00:09:50.848 "driver_specific": {} 00:09:50.848 } 00:09:50.848 ] 00:09:50.848 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.848 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:50.848 08:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:50.849 08:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.849 08:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.849 08:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:50.849 08:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:50.849 08:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.849 08:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.849 08:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.849 08:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.849 08:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.849 08:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.849 08:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:09:50.849 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.849 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.849 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.849 08:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.849 "name": "Existed_Raid", 00:09:50.849 "uuid": "fc429da3-faa3-4b01-ad61-f027bed710f9", 00:09:50.849 "strip_size_kb": 0, 00:09:50.849 "state": "configuring", 00:09:50.849 "raid_level": "raid1", 00:09:50.849 "superblock": true, 00:09:50.849 "num_base_bdevs": 3, 00:09:50.849 "num_base_bdevs_discovered": 1, 00:09:50.849 "num_base_bdevs_operational": 3, 00:09:50.849 "base_bdevs_list": [ 00:09:50.849 { 00:09:50.849 "name": "BaseBdev1", 00:09:50.849 "uuid": "418ca457-b749-4751-b694-48a9cb62eae6", 00:09:50.849 "is_configured": true, 00:09:50.849 "data_offset": 2048, 00:09:50.849 "data_size": 63488 00:09:50.849 }, 00:09:50.849 { 00:09:50.849 "name": "BaseBdev2", 00:09:50.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.849 "is_configured": false, 00:09:50.849 "data_offset": 0, 00:09:50.849 "data_size": 0 00:09:50.849 }, 00:09:50.849 { 00:09:50.849 "name": "BaseBdev3", 00:09:50.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.849 "is_configured": false, 00:09:50.849 "data_offset": 0, 00:09:50.849 "data_size": 0 00:09:50.849 } 00:09:50.849 ] 00:09:50.849 }' 00:09:50.849 08:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.849 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.111 08:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:51.111 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.111 
08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.111 [2024-10-05 08:46:27.507051] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:51.111 [2024-10-05 08:46:27.507103] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:51.111 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.111 08:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:51.111 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.111 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.111 [2024-10-05 08:46:27.515089] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:51.111 [2024-10-05 08:46:27.517166] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:51.111 [2024-10-05 08:46:27.517209] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:51.111 [2024-10-05 08:46:27.517219] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:51.111 [2024-10-05 08:46:27.517228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:51.111 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.111 08:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:51.111 08:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:51.111 08:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring 
raid1 0 3 00:09:51.111 08:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.111 08:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.111 08:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:51.111 08:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:51.111 08:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:51.111 08:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.111 08:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.111 08:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.111 08:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.111 08:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.111 08:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.111 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.111 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.111 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.111 08:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.111 "name": "Existed_Raid", 00:09:51.111 "uuid": "1dcbcf5f-9042-48dd-9620-847cb275bef6", 00:09:51.111 "strip_size_kb": 0, 00:09:51.111 "state": "configuring", 00:09:51.111 "raid_level": "raid1", 00:09:51.111 "superblock": true, 00:09:51.111 
"num_base_bdevs": 3, 00:09:51.111 "num_base_bdevs_discovered": 1, 00:09:51.111 "num_base_bdevs_operational": 3, 00:09:51.111 "base_bdevs_list": [ 00:09:51.111 { 00:09:51.111 "name": "BaseBdev1", 00:09:51.111 "uuid": "418ca457-b749-4751-b694-48a9cb62eae6", 00:09:51.111 "is_configured": true, 00:09:51.111 "data_offset": 2048, 00:09:51.112 "data_size": 63488 00:09:51.112 }, 00:09:51.112 { 00:09:51.112 "name": "BaseBdev2", 00:09:51.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.112 "is_configured": false, 00:09:51.112 "data_offset": 0, 00:09:51.112 "data_size": 0 00:09:51.112 }, 00:09:51.112 { 00:09:51.112 "name": "BaseBdev3", 00:09:51.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.112 "is_configured": false, 00:09:51.112 "data_offset": 0, 00:09:51.112 "data_size": 0 00:09:51.112 } 00:09:51.112 ] 00:09:51.112 }' 00:09:51.112 08:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.112 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.683 08:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:51.683 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.683 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.683 [2024-10-05 08:46:27.967113] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:51.683 BaseBdev2 00:09:51.683 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.683 08:46:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:51.683 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:51.683 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 
-- # local bdev_timeout= 00:09:51.683 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:51.683 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:51.683 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:51.683 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:51.683 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.683 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.683 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.683 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:51.684 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.684 08:46:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.684 [ 00:09:51.684 { 00:09:51.684 "name": "BaseBdev2", 00:09:51.684 "aliases": [ 00:09:51.684 "64f7ec1b-2e73-44af-b669-098ceaa369e4" 00:09:51.684 ], 00:09:51.684 "product_name": "Malloc disk", 00:09:51.684 "block_size": 512, 00:09:51.684 "num_blocks": 65536, 00:09:51.684 "uuid": "64f7ec1b-2e73-44af-b669-098ceaa369e4", 00:09:51.684 "assigned_rate_limits": { 00:09:51.684 "rw_ios_per_sec": 0, 00:09:51.684 "rw_mbytes_per_sec": 0, 00:09:51.684 "r_mbytes_per_sec": 0, 00:09:51.684 "w_mbytes_per_sec": 0 00:09:51.684 }, 00:09:51.684 "claimed": true, 00:09:51.684 "claim_type": "exclusive_write", 00:09:51.684 "zoned": false, 00:09:51.684 "supported_io_types": { 00:09:51.684 "read": true, 00:09:51.684 "write": true, 00:09:51.684 "unmap": true, 00:09:51.684 "flush": true, 00:09:51.684 "reset": true, 00:09:51.684 
"nvme_admin": false, 00:09:51.684 "nvme_io": false, 00:09:51.684 "nvme_io_md": false, 00:09:51.684 "write_zeroes": true, 00:09:51.684 "zcopy": true, 00:09:51.684 "get_zone_info": false, 00:09:51.684 "zone_management": false, 00:09:51.684 "zone_append": false, 00:09:51.684 "compare": false, 00:09:51.684 "compare_and_write": false, 00:09:51.684 "abort": true, 00:09:51.684 "seek_hole": false, 00:09:51.684 "seek_data": false, 00:09:51.684 "copy": true, 00:09:51.684 "nvme_iov_md": false 00:09:51.684 }, 00:09:51.684 "memory_domains": [ 00:09:51.684 { 00:09:51.684 "dma_device_id": "system", 00:09:51.684 "dma_device_type": 1 00:09:51.684 }, 00:09:51.684 { 00:09:51.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.684 "dma_device_type": 2 00:09:51.684 } 00:09:51.684 ], 00:09:51.684 "driver_specific": {} 00:09:51.684 } 00:09:51.684 ] 00:09:51.684 08:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.684 08:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:51.684 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:51.684 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:51.684 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:51.684 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.684 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.684 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:51.684 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:51.684 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:09:51.684 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.684 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.684 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.684 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.684 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.684 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.684 08:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.684 08:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.684 08:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.684 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.684 "name": "Existed_Raid", 00:09:51.684 "uuid": "1dcbcf5f-9042-48dd-9620-847cb275bef6", 00:09:51.684 "strip_size_kb": 0, 00:09:51.684 "state": "configuring", 00:09:51.684 "raid_level": "raid1", 00:09:51.684 "superblock": true, 00:09:51.684 "num_base_bdevs": 3, 00:09:51.684 "num_base_bdevs_discovered": 2, 00:09:51.684 "num_base_bdevs_operational": 3, 00:09:51.684 "base_bdevs_list": [ 00:09:51.684 { 00:09:51.684 "name": "BaseBdev1", 00:09:51.684 "uuid": "418ca457-b749-4751-b694-48a9cb62eae6", 00:09:51.684 "is_configured": true, 00:09:51.684 "data_offset": 2048, 00:09:51.684 "data_size": 63488 00:09:51.684 }, 00:09:51.684 { 00:09:51.684 "name": "BaseBdev2", 00:09:51.684 "uuid": "64f7ec1b-2e73-44af-b669-098ceaa369e4", 00:09:51.684 "is_configured": true, 00:09:51.684 "data_offset": 2048, 00:09:51.684 "data_size": 
63488 00:09:51.684 }, 00:09:51.684 { 00:09:51.684 "name": "BaseBdev3", 00:09:51.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.684 "is_configured": false, 00:09:51.684 "data_offset": 0, 00:09:51.684 "data_size": 0 00:09:51.684 } 00:09:51.684 ] 00:09:51.684 }' 00:09:51.684 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.684 08:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.943 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:51.943 08:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.943 08:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.201 [2024-10-05 08:46:28.422181] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:52.201 [2024-10-05 08:46:28.422552] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:52.201 [2024-10-05 08:46:28.422625] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:52.202 [2024-10-05 08:46:28.422941] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:52.202 [2024-10-05 08:46:28.423148] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:52.202 [2024-10-05 08:46:28.423188] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:52.202 BaseBdev3 00:09:52.202 [2024-10-05 08:46:28.423374] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.202 08:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.202 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:52.202 
08:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:52.202 08:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:52.202 08:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:52.202 08:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:52.202 08:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:52.202 08:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:52.202 08:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.202 08:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.202 08:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.202 08:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:52.202 08:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.202 08:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.202 [ 00:09:52.202 { 00:09:52.202 "name": "BaseBdev3", 00:09:52.202 "aliases": [ 00:09:52.202 "0dd92d55-2aa1-453d-a3c0-4f7129e9f8dd" 00:09:52.202 ], 00:09:52.202 "product_name": "Malloc disk", 00:09:52.202 "block_size": 512, 00:09:52.202 "num_blocks": 65536, 00:09:52.202 "uuid": "0dd92d55-2aa1-453d-a3c0-4f7129e9f8dd", 00:09:52.202 "assigned_rate_limits": { 00:09:52.202 "rw_ios_per_sec": 0, 00:09:52.202 "rw_mbytes_per_sec": 0, 00:09:52.202 "r_mbytes_per_sec": 0, 00:09:52.202 "w_mbytes_per_sec": 0 00:09:52.202 }, 00:09:52.202 "claimed": true, 00:09:52.202 "claim_type": "exclusive_write", 00:09:52.202 "zoned": 
false, 00:09:52.202 "supported_io_types": { 00:09:52.202 "read": true, 00:09:52.202 "write": true, 00:09:52.202 "unmap": true, 00:09:52.202 "flush": true, 00:09:52.202 "reset": true, 00:09:52.202 "nvme_admin": false, 00:09:52.202 "nvme_io": false, 00:09:52.202 "nvme_io_md": false, 00:09:52.202 "write_zeroes": true, 00:09:52.202 "zcopy": true, 00:09:52.202 "get_zone_info": false, 00:09:52.202 "zone_management": false, 00:09:52.202 "zone_append": false, 00:09:52.202 "compare": false, 00:09:52.202 "compare_and_write": false, 00:09:52.202 "abort": true, 00:09:52.202 "seek_hole": false, 00:09:52.202 "seek_data": false, 00:09:52.202 "copy": true, 00:09:52.202 "nvme_iov_md": false 00:09:52.202 }, 00:09:52.202 "memory_domains": [ 00:09:52.202 { 00:09:52.202 "dma_device_id": "system", 00:09:52.202 "dma_device_type": 1 00:09:52.202 }, 00:09:52.202 { 00:09:52.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.202 "dma_device_type": 2 00:09:52.202 } 00:09:52.202 ], 00:09:52.202 "driver_specific": {} 00:09:52.202 } 00:09:52.202 ] 00:09:52.202 08:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.202 08:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:52.202 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:52.202 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:52.202 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:52.202 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.202 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:52.202 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:52.202 08:46:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:52.202 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.202 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.202 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.202 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.202 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.202 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.202 08:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.202 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.202 08:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.202 08:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.202 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.202 "name": "Existed_Raid", 00:09:52.202 "uuid": "1dcbcf5f-9042-48dd-9620-847cb275bef6", 00:09:52.202 "strip_size_kb": 0, 00:09:52.202 "state": "online", 00:09:52.202 "raid_level": "raid1", 00:09:52.202 "superblock": true, 00:09:52.202 "num_base_bdevs": 3, 00:09:52.202 "num_base_bdevs_discovered": 3, 00:09:52.202 "num_base_bdevs_operational": 3, 00:09:52.202 "base_bdevs_list": [ 00:09:52.202 { 00:09:52.202 "name": "BaseBdev1", 00:09:52.202 "uuid": "418ca457-b749-4751-b694-48a9cb62eae6", 00:09:52.202 "is_configured": true, 00:09:52.202 "data_offset": 2048, 00:09:52.202 "data_size": 63488 00:09:52.202 }, 00:09:52.202 { 00:09:52.202 
"name": "BaseBdev2", 00:09:52.202 "uuid": "64f7ec1b-2e73-44af-b669-098ceaa369e4", 00:09:52.202 "is_configured": true, 00:09:52.202 "data_offset": 2048, 00:09:52.202 "data_size": 63488 00:09:52.202 }, 00:09:52.202 { 00:09:52.202 "name": "BaseBdev3", 00:09:52.202 "uuid": "0dd92d55-2aa1-453d-a3c0-4f7129e9f8dd", 00:09:52.202 "is_configured": true, 00:09:52.202 "data_offset": 2048, 00:09:52.202 "data_size": 63488 00:09:52.202 } 00:09:52.202 ] 00:09:52.202 }' 00:09:52.202 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.202 08:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.461 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:52.461 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:52.461 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:52.461 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:52.461 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:52.461 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:52.461 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:52.461 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:52.461 08:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.461 08:46:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.461 [2024-10-05 08:46:28.897736] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:52.461 08:46:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.461 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:52.461 "name": "Existed_Raid", 00:09:52.461 "aliases": [ 00:09:52.461 "1dcbcf5f-9042-48dd-9620-847cb275bef6" 00:09:52.461 ], 00:09:52.461 "product_name": "Raid Volume", 00:09:52.461 "block_size": 512, 00:09:52.461 "num_blocks": 63488, 00:09:52.461 "uuid": "1dcbcf5f-9042-48dd-9620-847cb275bef6", 00:09:52.461 "assigned_rate_limits": { 00:09:52.461 "rw_ios_per_sec": 0, 00:09:52.461 "rw_mbytes_per_sec": 0, 00:09:52.461 "r_mbytes_per_sec": 0, 00:09:52.461 "w_mbytes_per_sec": 0 00:09:52.461 }, 00:09:52.461 "claimed": false, 00:09:52.461 "zoned": false, 00:09:52.461 "supported_io_types": { 00:09:52.461 "read": true, 00:09:52.461 "write": true, 00:09:52.461 "unmap": false, 00:09:52.461 "flush": false, 00:09:52.461 "reset": true, 00:09:52.461 "nvme_admin": false, 00:09:52.461 "nvme_io": false, 00:09:52.461 "nvme_io_md": false, 00:09:52.461 "write_zeroes": true, 00:09:52.461 "zcopy": false, 00:09:52.461 "get_zone_info": false, 00:09:52.461 "zone_management": false, 00:09:52.461 "zone_append": false, 00:09:52.461 "compare": false, 00:09:52.461 "compare_and_write": false, 00:09:52.461 "abort": false, 00:09:52.461 "seek_hole": false, 00:09:52.461 "seek_data": false, 00:09:52.461 "copy": false, 00:09:52.461 "nvme_iov_md": false 00:09:52.461 }, 00:09:52.461 "memory_domains": [ 00:09:52.461 { 00:09:52.461 "dma_device_id": "system", 00:09:52.461 "dma_device_type": 1 00:09:52.461 }, 00:09:52.461 { 00:09:52.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.461 "dma_device_type": 2 00:09:52.461 }, 00:09:52.461 { 00:09:52.461 "dma_device_id": "system", 00:09:52.461 "dma_device_type": 1 00:09:52.461 }, 00:09:52.461 { 00:09:52.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.461 "dma_device_type": 2 00:09:52.461 }, 00:09:52.461 { 00:09:52.461 "dma_device_id": "system", 00:09:52.461 "dma_device_type": 1 00:09:52.461 }, 
00:09:52.461 { 00:09:52.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.461 "dma_device_type": 2 00:09:52.461 } 00:09:52.461 ], 00:09:52.461 "driver_specific": { 00:09:52.461 "raid": { 00:09:52.462 "uuid": "1dcbcf5f-9042-48dd-9620-847cb275bef6", 00:09:52.462 "strip_size_kb": 0, 00:09:52.462 "state": "online", 00:09:52.462 "raid_level": "raid1", 00:09:52.462 "superblock": true, 00:09:52.462 "num_base_bdevs": 3, 00:09:52.462 "num_base_bdevs_discovered": 3, 00:09:52.462 "num_base_bdevs_operational": 3, 00:09:52.462 "base_bdevs_list": [ 00:09:52.462 { 00:09:52.462 "name": "BaseBdev1", 00:09:52.462 "uuid": "418ca457-b749-4751-b694-48a9cb62eae6", 00:09:52.462 "is_configured": true, 00:09:52.462 "data_offset": 2048, 00:09:52.462 "data_size": 63488 00:09:52.462 }, 00:09:52.462 { 00:09:52.462 "name": "BaseBdev2", 00:09:52.462 "uuid": "64f7ec1b-2e73-44af-b669-098ceaa369e4", 00:09:52.462 "is_configured": true, 00:09:52.462 "data_offset": 2048, 00:09:52.462 "data_size": 63488 00:09:52.462 }, 00:09:52.462 { 00:09:52.462 "name": "BaseBdev3", 00:09:52.462 "uuid": "0dd92d55-2aa1-453d-a3c0-4f7129e9f8dd", 00:09:52.462 "is_configured": true, 00:09:52.462 "data_offset": 2048, 00:09:52.462 "data_size": 63488 00:09:52.462 } 00:09:52.462 ] 00:09:52.462 } 00:09:52.462 } 00:09:52.462 }' 00:09:52.462 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:52.722 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:52.722 BaseBdev2 00:09:52.722 BaseBdev3' 00:09:52.722 08:46:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:52.722 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:52.722 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for 
name in $base_bdev_names 00:09:52.722 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:52.722 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:52.722 08:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.722 08:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.722 08:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.722 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:52.722 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:52.722 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:52.722 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:52.722 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:52.722 08:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.722 08:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.722 08:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.722 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:52.722 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:52.722 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:52.722 08:46:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:52.722 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:52.722 08:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.722 08:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.722 08:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.722 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:52.722 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:52.722 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:52.722 08:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.722 08:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.722 [2024-10-05 08:46:29.145016] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:52.982 08:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.982 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:52.982 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:52.982 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:52.982 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:52.982 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:52.982 08:46:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:52.982 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.982 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:52.982 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:52.982 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:52.982 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:52.982 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.982 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.982 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.982 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.982 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.982 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.982 08:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.982 08:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.982 08:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.982 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.982 "name": "Existed_Raid", 00:09:52.982 "uuid": "1dcbcf5f-9042-48dd-9620-847cb275bef6", 00:09:52.982 "strip_size_kb": 0, 00:09:52.982 "state": "online", 00:09:52.982 "raid_level": 
"raid1", 00:09:52.982 "superblock": true, 00:09:52.982 "num_base_bdevs": 3, 00:09:52.982 "num_base_bdevs_discovered": 2, 00:09:52.982 "num_base_bdevs_operational": 2, 00:09:52.982 "base_bdevs_list": [ 00:09:52.982 { 00:09:52.982 "name": null, 00:09:52.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.982 "is_configured": false, 00:09:52.982 "data_offset": 0, 00:09:52.982 "data_size": 63488 00:09:52.982 }, 00:09:52.982 { 00:09:52.982 "name": "BaseBdev2", 00:09:52.982 "uuid": "64f7ec1b-2e73-44af-b669-098ceaa369e4", 00:09:52.982 "is_configured": true, 00:09:52.982 "data_offset": 2048, 00:09:52.982 "data_size": 63488 00:09:52.982 }, 00:09:52.982 { 00:09:52.982 "name": "BaseBdev3", 00:09:52.982 "uuid": "0dd92d55-2aa1-453d-a3c0-4f7129e9f8dd", 00:09:52.982 "is_configured": true, 00:09:52.982 "data_offset": 2048, 00:09:52.982 "data_size": 63488 00:09:52.982 } 00:09:52.982 ] 00:09:52.982 }' 00:09:52.982 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.982 08:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.241 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:53.241 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:53.241 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:53.241 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.241 08:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.241 08:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.241 08:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.241 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:09:53.241 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:53.241 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:53.241 08:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.241 08:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.241 [2024-10-05 08:46:29.693907] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:53.502 08:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.502 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:53.502 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:53.502 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.502 08:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.502 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:53.502 08:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.502 08:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.502 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:53.502 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:53.502 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:53.502 08:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.502 08:46:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.502 [2024-10-05 08:46:29.851737] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:53.502 [2024-10-05 08:46:29.851905] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:53.502 [2024-10-05 08:46:29.954254] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:53.502 [2024-10-05 08:46:29.954374] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:53.502 [2024-10-05 08:46:29.954417] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:53.502 08:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.502 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:53.502 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:53.502 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.502 08:46:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:53.502 08:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.502 08:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.762 08:46:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.762 08:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:53.762 08:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:53.762 08:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:53.762 08:46:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:53.762 08:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:53.762 08:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:53.762 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.762 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.762 BaseBdev2 00:09:53.762 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.762 08:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:53.762 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:53.762 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:53.762 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:53.762 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:53.762 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:53.762 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:53.762 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.762 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.762 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.762 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:53.762 08:46:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.762 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.762 [ 00:09:53.762 { 00:09:53.762 "name": "BaseBdev2", 00:09:53.762 "aliases": [ 00:09:53.762 "88686b7b-2358-40ca-8830-5aed439ccae5" 00:09:53.762 ], 00:09:53.762 "product_name": "Malloc disk", 00:09:53.762 "block_size": 512, 00:09:53.762 "num_blocks": 65536, 00:09:53.762 "uuid": "88686b7b-2358-40ca-8830-5aed439ccae5", 00:09:53.762 "assigned_rate_limits": { 00:09:53.762 "rw_ios_per_sec": 0, 00:09:53.762 "rw_mbytes_per_sec": 0, 00:09:53.762 "r_mbytes_per_sec": 0, 00:09:53.762 "w_mbytes_per_sec": 0 00:09:53.762 }, 00:09:53.762 "claimed": false, 00:09:53.762 "zoned": false, 00:09:53.762 "supported_io_types": { 00:09:53.762 "read": true, 00:09:53.762 "write": true, 00:09:53.762 "unmap": true, 00:09:53.762 "flush": true, 00:09:53.762 "reset": true, 00:09:53.762 "nvme_admin": false, 00:09:53.762 "nvme_io": false, 00:09:53.762 "nvme_io_md": false, 00:09:53.762 "write_zeroes": true, 00:09:53.762 "zcopy": true, 00:09:53.762 "get_zone_info": false, 00:09:53.762 "zone_management": false, 00:09:53.762 "zone_append": false, 00:09:53.762 "compare": false, 00:09:53.762 "compare_and_write": false, 00:09:53.762 "abort": true, 00:09:53.762 "seek_hole": false, 00:09:53.762 "seek_data": false, 00:09:53.762 "copy": true, 00:09:53.762 "nvme_iov_md": false 00:09:53.762 }, 00:09:53.762 "memory_domains": [ 00:09:53.762 { 00:09:53.762 "dma_device_id": "system", 00:09:53.762 "dma_device_type": 1 00:09:53.762 }, 00:09:53.762 { 00:09:53.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.762 "dma_device_type": 2 00:09:53.762 } 00:09:53.762 ], 00:09:53.762 "driver_specific": {} 00:09:53.762 } 00:09:53.762 ] 00:09:53.762 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.762 08:46:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:09:53.762 08:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:53.762 08:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:53.762 08:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:53.762 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.762 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.762 BaseBdev3 00:09:53.762 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.763 08:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:53.763 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:53.763 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:53.763 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:53.763 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:53.763 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:53.763 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:53.763 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.763 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.763 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.763 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:53.763 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.763 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.763 [ 00:09:53.763 { 00:09:53.763 "name": "BaseBdev3", 00:09:53.763 "aliases": [ 00:09:53.763 "40724a2b-40db-4342-80d8-4d7f0e27a239" 00:09:53.763 ], 00:09:53.763 "product_name": "Malloc disk", 00:09:53.763 "block_size": 512, 00:09:53.763 "num_blocks": 65536, 00:09:53.763 "uuid": "40724a2b-40db-4342-80d8-4d7f0e27a239", 00:09:53.763 "assigned_rate_limits": { 00:09:53.763 "rw_ios_per_sec": 0, 00:09:53.763 "rw_mbytes_per_sec": 0, 00:09:53.763 "r_mbytes_per_sec": 0, 00:09:53.763 "w_mbytes_per_sec": 0 00:09:53.763 }, 00:09:53.763 "claimed": false, 00:09:53.763 "zoned": false, 00:09:53.763 "supported_io_types": { 00:09:53.763 "read": true, 00:09:53.763 "write": true, 00:09:53.763 "unmap": true, 00:09:53.763 "flush": true, 00:09:53.763 "reset": true, 00:09:53.763 "nvme_admin": false, 00:09:53.763 "nvme_io": false, 00:09:53.763 "nvme_io_md": false, 00:09:53.763 "write_zeroes": true, 00:09:53.763 "zcopy": true, 00:09:53.763 "get_zone_info": false, 00:09:53.763 "zone_management": false, 00:09:53.763 "zone_append": false, 00:09:53.763 "compare": false, 00:09:53.763 "compare_and_write": false, 00:09:53.763 "abort": true, 00:09:53.763 "seek_hole": false, 00:09:53.763 "seek_data": false, 00:09:53.763 "copy": true, 00:09:53.763 "nvme_iov_md": false 00:09:53.763 }, 00:09:53.763 "memory_domains": [ 00:09:53.763 { 00:09:53.763 "dma_device_id": "system", 00:09:53.763 "dma_device_type": 1 00:09:53.763 }, 00:09:53.763 { 00:09:53.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.763 "dma_device_type": 2 00:09:53.763 } 00:09:53.763 ], 00:09:53.763 "driver_specific": {} 00:09:53.763 } 00:09:53.763 ] 00:09:53.763 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.763 
08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:53.763 08:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:53.763 08:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:53.763 08:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:53.763 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.763 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.763 [2024-10-05 08:46:30.178829] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:53.763 [2024-10-05 08:46:30.178950] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:53.763 [2024-10-05 08:46:30.179001] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:53.763 [2024-10-05 08:46:30.181084] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:53.763 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.763 08:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:53.763 08:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.763 08:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.763 08:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:53.763 08:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:53.763 08:46:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:53.763 08:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.763 08:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.763 08:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.763 08:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.763 08:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.763 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.763 08:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.763 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.763 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.024 08:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.024 "name": "Existed_Raid", 00:09:54.024 "uuid": "74bef482-79e0-4c95-961e-3fc70be8e53d", 00:09:54.024 "strip_size_kb": 0, 00:09:54.024 "state": "configuring", 00:09:54.024 "raid_level": "raid1", 00:09:54.024 "superblock": true, 00:09:54.024 "num_base_bdevs": 3, 00:09:54.024 "num_base_bdevs_discovered": 2, 00:09:54.024 "num_base_bdevs_operational": 3, 00:09:54.024 "base_bdevs_list": [ 00:09:54.024 { 00:09:54.024 "name": "BaseBdev1", 00:09:54.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.024 "is_configured": false, 00:09:54.024 "data_offset": 0, 00:09:54.024 "data_size": 0 00:09:54.024 }, 00:09:54.024 { 00:09:54.024 "name": "BaseBdev2", 00:09:54.024 "uuid": "88686b7b-2358-40ca-8830-5aed439ccae5", 00:09:54.024 "is_configured": 
true, 00:09:54.024 "data_offset": 2048, 00:09:54.024 "data_size": 63488 00:09:54.024 }, 00:09:54.024 { 00:09:54.024 "name": "BaseBdev3", 00:09:54.024 "uuid": "40724a2b-40db-4342-80d8-4d7f0e27a239", 00:09:54.024 "is_configured": true, 00:09:54.024 "data_offset": 2048, 00:09:54.024 "data_size": 63488 00:09:54.024 } 00:09:54.024 ] 00:09:54.024 }' 00:09:54.024 08:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.024 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.284 08:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:54.284 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.284 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.284 [2024-10-05 08:46:30.618055] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:54.284 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.284 08:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:54.284 08:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.284 08:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.284 08:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:54.284 08:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:54.284 08:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.284 08:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.284 08:46:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.284 08:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.284 08:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.284 08:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.284 08:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.284 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.284 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.284 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.284 08:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.284 "name": "Existed_Raid", 00:09:54.284 "uuid": "74bef482-79e0-4c95-961e-3fc70be8e53d", 00:09:54.284 "strip_size_kb": 0, 00:09:54.284 "state": "configuring", 00:09:54.284 "raid_level": "raid1", 00:09:54.284 "superblock": true, 00:09:54.284 "num_base_bdevs": 3, 00:09:54.284 "num_base_bdevs_discovered": 1, 00:09:54.284 "num_base_bdevs_operational": 3, 00:09:54.284 "base_bdevs_list": [ 00:09:54.284 { 00:09:54.284 "name": "BaseBdev1", 00:09:54.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.284 "is_configured": false, 00:09:54.284 "data_offset": 0, 00:09:54.284 "data_size": 0 00:09:54.284 }, 00:09:54.284 { 00:09:54.284 "name": null, 00:09:54.284 "uuid": "88686b7b-2358-40ca-8830-5aed439ccae5", 00:09:54.284 "is_configured": false, 00:09:54.284 "data_offset": 0, 00:09:54.284 "data_size": 63488 00:09:54.284 }, 00:09:54.284 { 00:09:54.284 "name": "BaseBdev3", 00:09:54.284 "uuid": "40724a2b-40db-4342-80d8-4d7f0e27a239", 00:09:54.284 "is_configured": true, 
00:09:54.284 "data_offset": 2048, 00:09:54.284 "data_size": 63488 00:09:54.284 } 00:09:54.284 ] 00:09:54.284 }' 00:09:54.284 08:46:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.284 08:46:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.855 08:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.855 08:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.855 08:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.855 08:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:54.855 08:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.855 08:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:54.855 08:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:54.855 08:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.855 08:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.855 [2024-10-05 08:46:31.175375] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:54.855 BaseBdev1 00:09:54.855 08:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.855 08:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:54.855 08:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:54.855 08:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:54.855 
08:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:54.855 08:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:54.855 08:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:54.855 08:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:54.855 08:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.855 08:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.855 08:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.855 08:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:54.855 08:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.855 08:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.855 [ 00:09:54.855 { 00:09:54.855 "name": "BaseBdev1", 00:09:54.855 "aliases": [ 00:09:54.855 "3dc29164-9c05-437f-9810-9b5501e5bf3e" 00:09:54.855 ], 00:09:54.855 "product_name": "Malloc disk", 00:09:54.855 "block_size": 512, 00:09:54.855 "num_blocks": 65536, 00:09:54.855 "uuid": "3dc29164-9c05-437f-9810-9b5501e5bf3e", 00:09:54.855 "assigned_rate_limits": { 00:09:54.855 "rw_ios_per_sec": 0, 00:09:54.855 "rw_mbytes_per_sec": 0, 00:09:54.855 "r_mbytes_per_sec": 0, 00:09:54.855 "w_mbytes_per_sec": 0 00:09:54.855 }, 00:09:54.855 "claimed": true, 00:09:54.855 "claim_type": "exclusive_write", 00:09:54.855 "zoned": false, 00:09:54.855 "supported_io_types": { 00:09:54.855 "read": true, 00:09:54.855 "write": true, 00:09:54.855 "unmap": true, 00:09:54.855 "flush": true, 00:09:54.855 "reset": true, 00:09:54.855 "nvme_admin": false, 00:09:54.855 "nvme_io": 
false, 00:09:54.855 "nvme_io_md": false, 00:09:54.855 "write_zeroes": true, 00:09:54.855 "zcopy": true, 00:09:54.855 "get_zone_info": false, 00:09:54.855 "zone_management": false, 00:09:54.855 "zone_append": false, 00:09:54.855 "compare": false, 00:09:54.855 "compare_and_write": false, 00:09:54.855 "abort": true, 00:09:54.855 "seek_hole": false, 00:09:54.855 "seek_data": false, 00:09:54.855 "copy": true, 00:09:54.855 "nvme_iov_md": false 00:09:54.855 }, 00:09:54.855 "memory_domains": [ 00:09:54.855 { 00:09:54.855 "dma_device_id": "system", 00:09:54.855 "dma_device_type": 1 00:09:54.855 }, 00:09:54.855 { 00:09:54.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.855 "dma_device_type": 2 00:09:54.855 } 00:09:54.855 ], 00:09:54.855 "driver_specific": {} 00:09:54.855 } 00:09:54.855 ] 00:09:54.855 08:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.855 08:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:54.855 08:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:54.855 08:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.855 08:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.855 08:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:54.855 08:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:54.855 08:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.855 08:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.855 08:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.855 08:46:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.855 08:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.855 08:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.855 08:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.855 08:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.855 08:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.855 08:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.855 08:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.855 "name": "Existed_Raid", 00:09:54.855 "uuid": "74bef482-79e0-4c95-961e-3fc70be8e53d", 00:09:54.855 "strip_size_kb": 0, 00:09:54.855 "state": "configuring", 00:09:54.855 "raid_level": "raid1", 00:09:54.855 "superblock": true, 00:09:54.855 "num_base_bdevs": 3, 00:09:54.855 "num_base_bdevs_discovered": 2, 00:09:54.855 "num_base_bdevs_operational": 3, 00:09:54.855 "base_bdevs_list": [ 00:09:54.855 { 00:09:54.855 "name": "BaseBdev1", 00:09:54.855 "uuid": "3dc29164-9c05-437f-9810-9b5501e5bf3e", 00:09:54.855 "is_configured": true, 00:09:54.855 "data_offset": 2048, 00:09:54.855 "data_size": 63488 00:09:54.855 }, 00:09:54.855 { 00:09:54.855 "name": null, 00:09:54.855 "uuid": "88686b7b-2358-40ca-8830-5aed439ccae5", 00:09:54.855 "is_configured": false, 00:09:54.855 "data_offset": 0, 00:09:54.855 "data_size": 63488 00:09:54.855 }, 00:09:54.855 { 00:09:54.855 "name": "BaseBdev3", 00:09:54.855 "uuid": "40724a2b-40db-4342-80d8-4d7f0e27a239", 00:09:54.855 "is_configured": true, 00:09:54.855 "data_offset": 2048, 00:09:54.855 "data_size": 63488 00:09:54.855 } 00:09:54.855 ] 00:09:54.855 }' 
00:09:54.855 08:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.855 08:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.424 08:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.424 08:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.424 08:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.424 08:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:55.424 08:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.424 08:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:55.424 08:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:55.425 08:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.425 08:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.425 [2024-10-05 08:46:31.706503] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:55.425 08:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.425 08:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:55.425 08:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.425 08:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.425 08:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:55.425 
08:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:55.425 08:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.425 08:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.425 08:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.425 08:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.425 08:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.425 08:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.425 08:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.425 08:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.425 08:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.425 08:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.425 08:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.425 "name": "Existed_Raid", 00:09:55.425 "uuid": "74bef482-79e0-4c95-961e-3fc70be8e53d", 00:09:55.425 "strip_size_kb": 0, 00:09:55.425 "state": "configuring", 00:09:55.425 "raid_level": "raid1", 00:09:55.425 "superblock": true, 00:09:55.425 "num_base_bdevs": 3, 00:09:55.425 "num_base_bdevs_discovered": 1, 00:09:55.425 "num_base_bdevs_operational": 3, 00:09:55.425 "base_bdevs_list": [ 00:09:55.425 { 00:09:55.425 "name": "BaseBdev1", 00:09:55.425 "uuid": "3dc29164-9c05-437f-9810-9b5501e5bf3e", 00:09:55.425 "is_configured": true, 00:09:55.425 "data_offset": 2048, 00:09:55.425 "data_size": 63488 00:09:55.425 }, 00:09:55.425 { 
00:09:55.425 "name": null, 00:09:55.425 "uuid": "88686b7b-2358-40ca-8830-5aed439ccae5", 00:09:55.425 "is_configured": false, 00:09:55.425 "data_offset": 0, 00:09:55.425 "data_size": 63488 00:09:55.425 }, 00:09:55.425 { 00:09:55.425 "name": null, 00:09:55.425 "uuid": "40724a2b-40db-4342-80d8-4d7f0e27a239", 00:09:55.425 "is_configured": false, 00:09:55.425 "data_offset": 0, 00:09:55.425 "data_size": 63488 00:09:55.425 } 00:09:55.425 ] 00:09:55.425 }' 00:09:55.425 08:46:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.425 08:46:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.994 08:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.994 08:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:55.994 08:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.994 08:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.994 08:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.994 08:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:55.994 08:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:55.994 08:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.994 08:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.994 [2024-10-05 08:46:32.213642] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:55.994 08:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.994 08:46:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:55.994 08:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.994 08:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.994 08:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:55.994 08:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:55.994 08:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.994 08:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.994 08:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.994 08:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.994 08:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.994 08:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.994 08:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.994 08:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.994 08:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.994 08:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.994 08:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.994 "name": "Existed_Raid", 00:09:55.994 "uuid": "74bef482-79e0-4c95-961e-3fc70be8e53d", 00:09:55.994 "strip_size_kb": 0, 
00:09:55.994 "state": "configuring", 00:09:55.994 "raid_level": "raid1", 00:09:55.994 "superblock": true, 00:09:55.994 "num_base_bdevs": 3, 00:09:55.994 "num_base_bdevs_discovered": 2, 00:09:55.994 "num_base_bdevs_operational": 3, 00:09:55.994 "base_bdevs_list": [ 00:09:55.994 { 00:09:55.994 "name": "BaseBdev1", 00:09:55.994 "uuid": "3dc29164-9c05-437f-9810-9b5501e5bf3e", 00:09:55.994 "is_configured": true, 00:09:55.994 "data_offset": 2048, 00:09:55.994 "data_size": 63488 00:09:55.994 }, 00:09:55.994 { 00:09:55.994 "name": null, 00:09:55.994 "uuid": "88686b7b-2358-40ca-8830-5aed439ccae5", 00:09:55.994 "is_configured": false, 00:09:55.994 "data_offset": 0, 00:09:55.994 "data_size": 63488 00:09:55.994 }, 00:09:55.994 { 00:09:55.994 "name": "BaseBdev3", 00:09:55.994 "uuid": "40724a2b-40db-4342-80d8-4d7f0e27a239", 00:09:55.994 "is_configured": true, 00:09:55.994 "data_offset": 2048, 00:09:55.994 "data_size": 63488 00:09:55.994 } 00:09:55.994 ] 00:09:55.994 }' 00:09:55.994 08:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.994 08:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.254 08:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.254 08:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.254 08:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.254 08:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:56.254 08:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.254 08:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:56.254 08:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete 
BaseBdev1 00:09:56.254 08:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.254 08:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.254 [2024-10-05 08:46:32.700892] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:56.514 08:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.514 08:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:56.514 08:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.514 08:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.514 08:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:56.514 08:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:56.514 08:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.514 08:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.514 08:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.514 08:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.514 08:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.514 08:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.514 08:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.514 08:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:56.514 08:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.514 08:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.514 08:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.514 "name": "Existed_Raid", 00:09:56.514 "uuid": "74bef482-79e0-4c95-961e-3fc70be8e53d", 00:09:56.514 "strip_size_kb": 0, 00:09:56.514 "state": "configuring", 00:09:56.514 "raid_level": "raid1", 00:09:56.514 "superblock": true, 00:09:56.514 "num_base_bdevs": 3, 00:09:56.514 "num_base_bdevs_discovered": 1, 00:09:56.514 "num_base_bdevs_operational": 3, 00:09:56.514 "base_bdevs_list": [ 00:09:56.514 { 00:09:56.514 "name": null, 00:09:56.514 "uuid": "3dc29164-9c05-437f-9810-9b5501e5bf3e", 00:09:56.514 "is_configured": false, 00:09:56.514 "data_offset": 0, 00:09:56.514 "data_size": 63488 00:09:56.514 }, 00:09:56.514 { 00:09:56.514 "name": null, 00:09:56.514 "uuid": "88686b7b-2358-40ca-8830-5aed439ccae5", 00:09:56.514 "is_configured": false, 00:09:56.514 "data_offset": 0, 00:09:56.514 "data_size": 63488 00:09:56.514 }, 00:09:56.514 { 00:09:56.514 "name": "BaseBdev3", 00:09:56.514 "uuid": "40724a2b-40db-4342-80d8-4d7f0e27a239", 00:09:56.514 "is_configured": true, 00:09:56.514 "data_offset": 2048, 00:09:56.514 "data_size": 63488 00:09:56.514 } 00:09:56.514 ] 00:09:56.514 }' 00:09:56.515 08:46:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.515 08:46:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.084 08:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.084 08:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.084 08:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.084 
08:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:57.084 08:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.084 08:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:57.084 08:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:57.084 08:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.084 08:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.084 [2024-10-05 08:46:33.346092] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:57.084 08:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.084 08:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:57.085 08:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.085 08:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.085 08:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:57.085 08:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:57.085 08:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:57.085 08:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.085 08:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.085 08:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:09:57.085 08:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.085 08:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.085 08:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.085 08:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.085 08:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.085 08:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.085 08:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.085 "name": "Existed_Raid", 00:09:57.085 "uuid": "74bef482-79e0-4c95-961e-3fc70be8e53d", 00:09:57.085 "strip_size_kb": 0, 00:09:57.085 "state": "configuring", 00:09:57.085 "raid_level": "raid1", 00:09:57.085 "superblock": true, 00:09:57.085 "num_base_bdevs": 3, 00:09:57.085 "num_base_bdevs_discovered": 2, 00:09:57.085 "num_base_bdevs_operational": 3, 00:09:57.085 "base_bdevs_list": [ 00:09:57.085 { 00:09:57.085 "name": null, 00:09:57.085 "uuid": "3dc29164-9c05-437f-9810-9b5501e5bf3e", 00:09:57.085 "is_configured": false, 00:09:57.085 "data_offset": 0, 00:09:57.085 "data_size": 63488 00:09:57.085 }, 00:09:57.085 { 00:09:57.085 "name": "BaseBdev2", 00:09:57.085 "uuid": "88686b7b-2358-40ca-8830-5aed439ccae5", 00:09:57.085 "is_configured": true, 00:09:57.085 "data_offset": 2048, 00:09:57.085 "data_size": 63488 00:09:57.085 }, 00:09:57.085 { 00:09:57.085 "name": "BaseBdev3", 00:09:57.085 "uuid": "40724a2b-40db-4342-80d8-4d7f0e27a239", 00:09:57.085 "is_configured": true, 00:09:57.085 "data_offset": 2048, 00:09:57.085 "data_size": 63488 00:09:57.085 } 00:09:57.085 ] 00:09:57.085 }' 00:09:57.085 08:46:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.085 08:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.344 08:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.344 08:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.344 08:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.344 08:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:57.604 08:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.604 08:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:57.604 08:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.604 08:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.604 08:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.604 08:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:57.604 08:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.604 08:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3dc29164-9c05-437f-9810-9b5501e5bf3e 00:09:57.604 08:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.604 08:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.604 [2024-10-05 08:46:33.946596] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:57.604 [2024-10-05 08:46:33.946888] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:57.604 [2024-10-05 08:46:33.946907] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:57.604 [2024-10-05 08:46:33.947228] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:57.604 [2024-10-05 08:46:33.947393] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:57.604 [2024-10-05 08:46:33.947407] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:57.604 NewBaseBdev 00:09:57.604 [2024-10-05 08:46:33.947541] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:57.604 08:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.604 08:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:57.604 08:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:57.604 08:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:57.604 08:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:57.604 08:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:57.604 08:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:57.604 08:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:57.604 08:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.604 08:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.604 08:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:57.604 08:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:57.604 08:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.604 08:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.604 [ 00:09:57.604 { 00:09:57.604 "name": "NewBaseBdev", 00:09:57.604 "aliases": [ 00:09:57.604 "3dc29164-9c05-437f-9810-9b5501e5bf3e" 00:09:57.604 ], 00:09:57.604 "product_name": "Malloc disk", 00:09:57.604 "block_size": 512, 00:09:57.604 "num_blocks": 65536, 00:09:57.604 "uuid": "3dc29164-9c05-437f-9810-9b5501e5bf3e", 00:09:57.604 "assigned_rate_limits": { 00:09:57.604 "rw_ios_per_sec": 0, 00:09:57.604 "rw_mbytes_per_sec": 0, 00:09:57.604 "r_mbytes_per_sec": 0, 00:09:57.604 "w_mbytes_per_sec": 0 00:09:57.604 }, 00:09:57.604 "claimed": true, 00:09:57.604 "claim_type": "exclusive_write", 00:09:57.604 "zoned": false, 00:09:57.604 "supported_io_types": { 00:09:57.604 "read": true, 00:09:57.604 "write": true, 00:09:57.604 "unmap": true, 00:09:57.604 "flush": true, 00:09:57.604 "reset": true, 00:09:57.604 "nvme_admin": false, 00:09:57.604 "nvme_io": false, 00:09:57.604 "nvme_io_md": false, 00:09:57.604 "write_zeroes": true, 00:09:57.604 "zcopy": true, 00:09:57.604 "get_zone_info": false, 00:09:57.604 "zone_management": false, 00:09:57.604 "zone_append": false, 00:09:57.604 "compare": false, 00:09:57.604 "compare_and_write": false, 00:09:57.604 "abort": true, 00:09:57.604 "seek_hole": false, 00:09:57.604 "seek_data": false, 00:09:57.604 "copy": true, 00:09:57.604 "nvme_iov_md": false 00:09:57.604 }, 00:09:57.604 "memory_domains": [ 00:09:57.604 { 00:09:57.604 "dma_device_id": "system", 00:09:57.604 "dma_device_type": 1 00:09:57.604 }, 00:09:57.604 { 00:09:57.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.604 "dma_device_type": 2 00:09:57.604 } 00:09:57.604 ], 00:09:57.604 
"driver_specific": {} 00:09:57.604 } 00:09:57.604 ] 00:09:57.604 08:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.604 08:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:57.604 08:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:57.604 08:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.604 08:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:57.604 08:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:57.604 08:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:57.604 08:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:57.604 08:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.604 08:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.604 08:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.604 08:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.604 08:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.604 08:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.604 08:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.604 08:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.604 08:46:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.604 08:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.604 "name": "Existed_Raid", 00:09:57.604 "uuid": "74bef482-79e0-4c95-961e-3fc70be8e53d", 00:09:57.604 "strip_size_kb": 0, 00:09:57.604 "state": "online", 00:09:57.604 "raid_level": "raid1", 00:09:57.604 "superblock": true, 00:09:57.604 "num_base_bdevs": 3, 00:09:57.604 "num_base_bdevs_discovered": 3, 00:09:57.604 "num_base_bdevs_operational": 3, 00:09:57.604 "base_bdevs_list": [ 00:09:57.604 { 00:09:57.604 "name": "NewBaseBdev", 00:09:57.604 "uuid": "3dc29164-9c05-437f-9810-9b5501e5bf3e", 00:09:57.604 "is_configured": true, 00:09:57.605 "data_offset": 2048, 00:09:57.605 "data_size": 63488 00:09:57.605 }, 00:09:57.605 { 00:09:57.605 "name": "BaseBdev2", 00:09:57.605 "uuid": "88686b7b-2358-40ca-8830-5aed439ccae5", 00:09:57.605 "is_configured": true, 00:09:57.605 "data_offset": 2048, 00:09:57.605 "data_size": 63488 00:09:57.605 }, 00:09:57.605 { 00:09:57.605 "name": "BaseBdev3", 00:09:57.605 "uuid": "40724a2b-40db-4342-80d8-4d7f0e27a239", 00:09:57.605 "is_configured": true, 00:09:57.605 "data_offset": 2048, 00:09:57.605 "data_size": 63488 00:09:57.605 } 00:09:57.605 ] 00:09:57.605 }' 00:09:57.605 08:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.605 08:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.174 08:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:58.174 08:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:58.174 08:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:58.174 08:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:58.174 08:46:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:58.174 08:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:58.174 08:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:58.174 08:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.174 08:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.174 08:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:58.174 [2024-10-05 08:46:34.398259] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:58.174 08:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.174 08:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:58.174 "name": "Existed_Raid", 00:09:58.174 "aliases": [ 00:09:58.174 "74bef482-79e0-4c95-961e-3fc70be8e53d" 00:09:58.174 ], 00:09:58.174 "product_name": "Raid Volume", 00:09:58.174 "block_size": 512, 00:09:58.174 "num_blocks": 63488, 00:09:58.174 "uuid": "74bef482-79e0-4c95-961e-3fc70be8e53d", 00:09:58.174 "assigned_rate_limits": { 00:09:58.174 "rw_ios_per_sec": 0, 00:09:58.174 "rw_mbytes_per_sec": 0, 00:09:58.174 "r_mbytes_per_sec": 0, 00:09:58.174 "w_mbytes_per_sec": 0 00:09:58.174 }, 00:09:58.174 "claimed": false, 00:09:58.174 "zoned": false, 00:09:58.174 "supported_io_types": { 00:09:58.174 "read": true, 00:09:58.174 "write": true, 00:09:58.174 "unmap": false, 00:09:58.174 "flush": false, 00:09:58.174 "reset": true, 00:09:58.174 "nvme_admin": false, 00:09:58.174 "nvme_io": false, 00:09:58.174 "nvme_io_md": false, 00:09:58.174 "write_zeroes": true, 00:09:58.174 "zcopy": false, 00:09:58.174 "get_zone_info": false, 00:09:58.174 "zone_management": false, 00:09:58.174 "zone_append": false, 
00:09:58.174 "compare": false, 00:09:58.174 "compare_and_write": false, 00:09:58.174 "abort": false, 00:09:58.174 "seek_hole": false, 00:09:58.174 "seek_data": false, 00:09:58.174 "copy": false, 00:09:58.174 "nvme_iov_md": false 00:09:58.174 }, 00:09:58.174 "memory_domains": [ 00:09:58.174 { 00:09:58.174 "dma_device_id": "system", 00:09:58.174 "dma_device_type": 1 00:09:58.174 }, 00:09:58.174 { 00:09:58.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.174 "dma_device_type": 2 00:09:58.174 }, 00:09:58.174 { 00:09:58.174 "dma_device_id": "system", 00:09:58.174 "dma_device_type": 1 00:09:58.174 }, 00:09:58.174 { 00:09:58.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.174 "dma_device_type": 2 00:09:58.174 }, 00:09:58.174 { 00:09:58.174 "dma_device_id": "system", 00:09:58.174 "dma_device_type": 1 00:09:58.174 }, 00:09:58.174 { 00:09:58.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.174 "dma_device_type": 2 00:09:58.174 } 00:09:58.174 ], 00:09:58.174 "driver_specific": { 00:09:58.174 "raid": { 00:09:58.175 "uuid": "74bef482-79e0-4c95-961e-3fc70be8e53d", 00:09:58.175 "strip_size_kb": 0, 00:09:58.175 "state": "online", 00:09:58.175 "raid_level": "raid1", 00:09:58.175 "superblock": true, 00:09:58.175 "num_base_bdevs": 3, 00:09:58.175 "num_base_bdevs_discovered": 3, 00:09:58.175 "num_base_bdevs_operational": 3, 00:09:58.175 "base_bdevs_list": [ 00:09:58.175 { 00:09:58.175 "name": "NewBaseBdev", 00:09:58.175 "uuid": "3dc29164-9c05-437f-9810-9b5501e5bf3e", 00:09:58.175 "is_configured": true, 00:09:58.175 "data_offset": 2048, 00:09:58.175 "data_size": 63488 00:09:58.175 }, 00:09:58.175 { 00:09:58.175 "name": "BaseBdev2", 00:09:58.175 "uuid": "88686b7b-2358-40ca-8830-5aed439ccae5", 00:09:58.175 "is_configured": true, 00:09:58.175 "data_offset": 2048, 00:09:58.175 "data_size": 63488 00:09:58.175 }, 00:09:58.175 { 00:09:58.175 "name": "BaseBdev3", 00:09:58.175 "uuid": "40724a2b-40db-4342-80d8-4d7f0e27a239", 00:09:58.175 "is_configured": true, 00:09:58.175 
"data_offset": 2048, 00:09:58.175 "data_size": 63488 00:09:58.175 } 00:09:58.175 ] 00:09:58.175 } 00:09:58.175 } 00:09:58.175 }' 00:09:58.175 08:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:58.175 08:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:58.175 BaseBdev2 00:09:58.175 BaseBdev3' 00:09:58.175 08:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.175 08:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:58.175 08:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:58.175 08:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:58.175 08:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.175 08:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.175 08:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.175 08:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.175 08:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:58.175 08:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:58.175 08:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:58.175 08:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:58.175 08:46:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.175 08:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.175 08:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.175 08:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.175 08:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:58.175 08:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:58.175 08:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:58.175 08:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:58.175 08:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:58.175 08:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.175 08:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.175 08:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.175 08:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:58.175 08:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:58.175 08:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:58.175 08:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.175 08:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:09:58.175 [2024-10-05 08:46:34.625409] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:58.175 [2024-10-05 08:46:34.625444] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:58.175 [2024-10-05 08:46:34.625518] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:58.175 [2024-10-05 08:46:34.625824] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:58.175 [2024-10-05 08:46:34.625835] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:58.175 08:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.175 08:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 67031 00:09:58.175 08:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 67031 ']' 00:09:58.175 08:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 67031 00:09:58.175 08:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:58.175 08:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:58.175 08:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67031 00:09:58.435 08:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:58.435 08:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:58.435 08:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67031' 00:09:58.435 killing process with pid 67031 00:09:58.435 08:46:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@969 -- # kill 67031 00:09:58.435 [2024-10-05 08:46:34.677194] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:58.435 08:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 67031 00:09:58.698 [2024-10-05 08:46:34.998488] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:00.082 08:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:00.082 00:10:00.082 real 0m10.757s 00:10:00.082 user 0m16.750s 00:10:00.082 sys 0m1.984s 00:10:00.082 08:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:00.082 08:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.082 ************************************ 00:10:00.082 END TEST raid_state_function_test_sb 00:10:00.082 ************************************ 00:10:00.082 08:46:36 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:10:00.082 08:46:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:00.082 08:46:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:00.082 08:46:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:00.082 ************************************ 00:10:00.082 START TEST raid_superblock_test 00:10:00.082 ************************************ 00:10:00.082 08:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3 00:10:00.082 08:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:00.082 08:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:00.082 08:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:00.082 08:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:00.082 
08:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:00.082 08:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:00.082 08:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:00.082 08:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:00.082 08:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:00.082 08:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:00.082 08:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:00.082 08:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:00.082 08:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:00.082 08:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:10:00.082 08:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:00.082 08:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67592 00:10:00.082 08:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:00.082 08:46:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67592 00:10:00.082 08:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 67592 ']' 00:10:00.082 08:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.082 08:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:00.082 08:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:10:00.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.082 08:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:00.082 08:46:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.082 [2024-10-05 08:46:36.505368] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:10:00.082 [2024-10-05 08:46:36.505573] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67592 ] 00:10:00.341 [2024-10-05 08:46:36.674610] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.601 [2024-10-05 08:46:36.922940] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.860 [2024-10-05 08:46:37.156872] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:00.860 [2024-10-05 08:46:37.157019] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:01.120 08:46:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.120 malloc1 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.120 [2024-10-05 08:46:37.392363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:01.120 [2024-10-05 08:46:37.392432] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:01.120 [2024-10-05 08:46:37.392459] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:01.120 [2024-10-05 08:46:37.392480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:01.120 [2024-10-05 08:46:37.394889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:01.120 [2024-10-05 08:46:37.394928] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:01.120 pt1 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.120 malloc2 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.120 [2024-10-05 08:46:37.463638] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:01.120 [2024-10-05 08:46:37.463762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:01.120 [2024-10-05 08:46:37.463803] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:01.120 [2024-10-05 08:46:37.463832] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:01.120 [2024-10-05 08:46:37.466202] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:01.120 [2024-10-05 08:46:37.466270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:01.120 pt2 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.120 malloc3 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.120 08:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.120 [2024-10-05 08:46:37.528322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:01.120 [2024-10-05 08:46:37.528424] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:01.120 [2024-10-05 08:46:37.528461] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:01.121 [2024-10-05 08:46:37.528489] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:01.121 [2024-10-05 08:46:37.530811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:01.121 [2024-10-05 08:46:37.530884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:01.121 pt3 00:10:01.121 08:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.121 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:01.121 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:01.121 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:01.121 08:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.121 08:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.121 [2024-10-05 08:46:37.540381] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:01.121 [2024-10-05 08:46:37.542436] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:01.121 [2024-10-05 
08:46:37.542502] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:01.121 [2024-10-05 08:46:37.542650] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:01.121 [2024-10-05 08:46:37.542679] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:01.121 [2024-10-05 08:46:37.542911] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:01.121 [2024-10-05 08:46:37.543121] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:01.121 [2024-10-05 08:46:37.543133] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:01.121 [2024-10-05 08:46:37.543277] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:01.121 08:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.121 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:01.121 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:01.121 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:01.121 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:01.121 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:01.121 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.121 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.121 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.121 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.121 
08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.121 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.121 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:01.121 08:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.121 08:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.121 08:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.381 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.381 "name": "raid_bdev1", 00:10:01.381 "uuid": "7ed0c221-6c09-4835-b492-24761674d28d", 00:10:01.381 "strip_size_kb": 0, 00:10:01.381 "state": "online", 00:10:01.381 "raid_level": "raid1", 00:10:01.381 "superblock": true, 00:10:01.381 "num_base_bdevs": 3, 00:10:01.381 "num_base_bdevs_discovered": 3, 00:10:01.381 "num_base_bdevs_operational": 3, 00:10:01.381 "base_bdevs_list": [ 00:10:01.381 { 00:10:01.381 "name": "pt1", 00:10:01.381 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:01.381 "is_configured": true, 00:10:01.381 "data_offset": 2048, 00:10:01.381 "data_size": 63488 00:10:01.381 }, 00:10:01.381 { 00:10:01.381 "name": "pt2", 00:10:01.381 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:01.381 "is_configured": true, 00:10:01.381 "data_offset": 2048, 00:10:01.381 "data_size": 63488 00:10:01.381 }, 00:10:01.381 { 00:10:01.381 "name": "pt3", 00:10:01.381 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:01.381 "is_configured": true, 00:10:01.381 "data_offset": 2048, 00:10:01.381 "data_size": 63488 00:10:01.381 } 00:10:01.381 ] 00:10:01.381 }' 00:10:01.381 08:46:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.381 08:46:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:10:01.640 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:01.640 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:01.640 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:01.640 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:01.640 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:01.640 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:01.641 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:01.641 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:01.641 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.641 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.641 [2024-10-05 08:46:38.027804] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:01.641 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.641 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:01.641 "name": "raid_bdev1", 00:10:01.641 "aliases": [ 00:10:01.641 "7ed0c221-6c09-4835-b492-24761674d28d" 00:10:01.641 ], 00:10:01.641 "product_name": "Raid Volume", 00:10:01.641 "block_size": 512, 00:10:01.641 "num_blocks": 63488, 00:10:01.641 "uuid": "7ed0c221-6c09-4835-b492-24761674d28d", 00:10:01.641 "assigned_rate_limits": { 00:10:01.641 "rw_ios_per_sec": 0, 00:10:01.641 "rw_mbytes_per_sec": 0, 00:10:01.641 "r_mbytes_per_sec": 0, 00:10:01.641 "w_mbytes_per_sec": 0 00:10:01.641 }, 00:10:01.641 "claimed": false, 00:10:01.641 "zoned": false, 00:10:01.641 
"supported_io_types": { 00:10:01.641 "read": true, 00:10:01.641 "write": true, 00:10:01.641 "unmap": false, 00:10:01.641 "flush": false, 00:10:01.641 "reset": true, 00:10:01.641 "nvme_admin": false, 00:10:01.641 "nvme_io": false, 00:10:01.641 "nvme_io_md": false, 00:10:01.641 "write_zeroes": true, 00:10:01.641 "zcopy": false, 00:10:01.641 "get_zone_info": false, 00:10:01.641 "zone_management": false, 00:10:01.641 "zone_append": false, 00:10:01.641 "compare": false, 00:10:01.641 "compare_and_write": false, 00:10:01.641 "abort": false, 00:10:01.641 "seek_hole": false, 00:10:01.641 "seek_data": false, 00:10:01.641 "copy": false, 00:10:01.641 "nvme_iov_md": false 00:10:01.641 }, 00:10:01.641 "memory_domains": [ 00:10:01.641 { 00:10:01.641 "dma_device_id": "system", 00:10:01.641 "dma_device_type": 1 00:10:01.641 }, 00:10:01.641 { 00:10:01.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.641 "dma_device_type": 2 00:10:01.641 }, 00:10:01.641 { 00:10:01.641 "dma_device_id": "system", 00:10:01.641 "dma_device_type": 1 00:10:01.641 }, 00:10:01.641 { 00:10:01.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.641 "dma_device_type": 2 00:10:01.641 }, 00:10:01.641 { 00:10:01.641 "dma_device_id": "system", 00:10:01.641 "dma_device_type": 1 00:10:01.641 }, 00:10:01.641 { 00:10:01.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.641 "dma_device_type": 2 00:10:01.641 } 00:10:01.641 ], 00:10:01.641 "driver_specific": { 00:10:01.641 "raid": { 00:10:01.641 "uuid": "7ed0c221-6c09-4835-b492-24761674d28d", 00:10:01.641 "strip_size_kb": 0, 00:10:01.641 "state": "online", 00:10:01.641 "raid_level": "raid1", 00:10:01.641 "superblock": true, 00:10:01.641 "num_base_bdevs": 3, 00:10:01.641 "num_base_bdevs_discovered": 3, 00:10:01.641 "num_base_bdevs_operational": 3, 00:10:01.641 "base_bdevs_list": [ 00:10:01.641 { 00:10:01.641 "name": "pt1", 00:10:01.641 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:01.641 "is_configured": true, 00:10:01.641 "data_offset": 2048, 
00:10:01.641 "data_size": 63488 00:10:01.641 }, 00:10:01.641 { 00:10:01.641 "name": "pt2", 00:10:01.641 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:01.641 "is_configured": true, 00:10:01.641 "data_offset": 2048, 00:10:01.641 "data_size": 63488 00:10:01.641 }, 00:10:01.641 { 00:10:01.641 "name": "pt3", 00:10:01.641 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:01.641 "is_configured": true, 00:10:01.641 "data_offset": 2048, 00:10:01.641 "data_size": 63488 00:10:01.641 } 00:10:01.641 ] 00:10:01.641 } 00:10:01.641 } 00:10:01.641 }' 00:10:01.641 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:01.641 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:01.641 pt2 00:10:01.641 pt3' 00:10:01.641 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:01.901 08:46:38 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:01.901 [2024-10-05 08:46:38.247346] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7ed0c221-6c09-4835-b492-24761674d28d 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7ed0c221-6c09-4835-b492-24761674d28d ']' 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.901 [2024-10-05 08:46:38.279050] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:01.901 [2024-10-05 08:46:38.279119] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:01.901 [2024-10-05 08:46:38.279192] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:01.901 [2024-10-05 08:46:38.279272] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:01.901 [2024-10-05 08:46:38.279282] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:01.901 08:46:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:01.901 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.902 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:02.161 08:46:38 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.161 [2024-10-05 08:46:38.430804] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:02.161 [2024-10-05 08:46:38.432906] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:02.161 [2024-10-05 08:46:38.433015] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:02.161 [2024-10-05 08:46:38.433072] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:02.161 [2024-10-05 08:46:38.433116] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:02.161 [2024-10-05 08:46:38.433134] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:02.161 [2024-10-05 08:46:38.433150] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:02.161 [2024-10-05 08:46:38.433160] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:02.161 request: 00:10:02.161 { 00:10:02.161 "name": "raid_bdev1", 00:10:02.161 "raid_level": "raid1", 00:10:02.161 "base_bdevs": [ 00:10:02.161 "malloc1", 00:10:02.161 "malloc2", 00:10:02.161 "malloc3" 00:10:02.161 ], 00:10:02.161 "superblock": false, 00:10:02.161 "method": "bdev_raid_create", 00:10:02.161 "req_id": 1 00:10:02.161 } 00:10:02.161 Got JSON-RPC error response 00:10:02.161 response: 00:10:02.161 { 00:10:02.161 "code": -17, 00:10:02.161 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:02.161 } 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:02.161 08:46:38 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.161 [2024-10-05 08:46:38.482682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:02.161 [2024-10-05 08:46:38.482771] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.161 [2024-10-05 08:46:38.482814] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:02.161 [2024-10-05 08:46:38.482842] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.161 [2024-10-05 08:46:38.485251] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.161 [2024-10-05 08:46:38.485316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:02.161 [2024-10-05 08:46:38.485411] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:02.161 [2024-10-05 08:46:38.485494] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:02.161 pt1 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.161 "name": "raid_bdev1", 00:10:02.161 "uuid": "7ed0c221-6c09-4835-b492-24761674d28d", 00:10:02.161 "strip_size_kb": 0, 00:10:02.161 "state": "configuring", 00:10:02.161 "raid_level": "raid1", 00:10:02.161 "superblock": true, 00:10:02.161 "num_base_bdevs": 3, 00:10:02.161 "num_base_bdevs_discovered": 1, 00:10:02.161 "num_base_bdevs_operational": 3, 00:10:02.161 "base_bdevs_list": [ 00:10:02.161 { 00:10:02.161 "name": "pt1", 00:10:02.161 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:02.161 "is_configured": true, 00:10:02.161 "data_offset": 2048, 00:10:02.161 "data_size": 63488 00:10:02.161 }, 00:10:02.161 { 00:10:02.161 "name": null, 00:10:02.161 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:02.161 "is_configured": false, 00:10:02.161 "data_offset": 2048, 00:10:02.161 "data_size": 63488 00:10:02.161 }, 00:10:02.161 { 00:10:02.161 "name": null, 00:10:02.161 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:02.161 "is_configured": false, 00:10:02.161 "data_offset": 2048, 00:10:02.161 "data_size": 63488 00:10:02.161 } 00:10:02.161 ] 00:10:02.161 }' 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.161 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.421 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:02.421 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:02.421 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:02.681 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.681 [2024-10-05 08:46:38.898022] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:02.681 [2024-10-05 08:46:38.898083] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.681 [2024-10-05 08:46:38.898107] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:02.681 [2024-10-05 08:46:38.898117] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.681 [2024-10-05 08:46:38.898538] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.681 [2024-10-05 08:46:38.898559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:02.681 [2024-10-05 08:46:38.898638] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:02.681 [2024-10-05 08:46:38.898659] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:02.681 pt2 00:10:02.681 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.681 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:02.681 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.681 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.681 [2024-10-05 08:46:38.906026] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:02.681 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.681 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:02.681 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:02.681 
08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.681 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.681 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.682 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.682 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.682 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.682 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.682 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.682 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:02.682 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.682 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.682 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.682 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.682 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.682 "name": "raid_bdev1", 00:10:02.682 "uuid": "7ed0c221-6c09-4835-b492-24761674d28d", 00:10:02.682 "strip_size_kb": 0, 00:10:02.682 "state": "configuring", 00:10:02.682 "raid_level": "raid1", 00:10:02.682 "superblock": true, 00:10:02.682 "num_base_bdevs": 3, 00:10:02.682 "num_base_bdevs_discovered": 1, 00:10:02.682 "num_base_bdevs_operational": 3, 00:10:02.682 "base_bdevs_list": [ 00:10:02.682 { 00:10:02.682 "name": "pt1", 00:10:02.682 "uuid": "00000000-0000-0000-0000-000000000001", 
00:10:02.682 "is_configured": true, 00:10:02.682 "data_offset": 2048, 00:10:02.682 "data_size": 63488 00:10:02.682 }, 00:10:02.682 { 00:10:02.682 "name": null, 00:10:02.682 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:02.682 "is_configured": false, 00:10:02.682 "data_offset": 0, 00:10:02.682 "data_size": 63488 00:10:02.682 }, 00:10:02.682 { 00:10:02.682 "name": null, 00:10:02.682 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:02.682 "is_configured": false, 00:10:02.682 "data_offset": 2048, 00:10:02.682 "data_size": 63488 00:10:02.682 } 00:10:02.682 ] 00:10:02.682 }' 00:10:02.682 08:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.682 08:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.942 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:02.942 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:02.942 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:02.942 08:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.942 08:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.942 [2024-10-05 08:46:39.349210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:02.942 [2024-10-05 08:46:39.349329] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.942 [2024-10-05 08:46:39.349362] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:02.942 [2024-10-05 08:46:39.349392] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.942 [2024-10-05 08:46:39.349852] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.942 [2024-10-05 
08:46:39.349910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:02.942 [2024-10-05 08:46:39.350040] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:02.942 [2024-10-05 08:46:39.350099] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:02.942 pt2 00:10:02.942 08:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.942 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:02.942 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:02.942 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:02.942 08:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.942 08:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.942 [2024-10-05 08:46:39.361223] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:02.942 [2024-10-05 08:46:39.361320] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.942 [2024-10-05 08:46:39.361359] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:02.942 [2024-10-05 08:46:39.361392] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.942 [2024-10-05 08:46:39.361752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.942 [2024-10-05 08:46:39.361810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:02.942 [2024-10-05 08:46:39.361889] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:02.942 [2024-10-05 08:46:39.361932] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is 
claimed 00:10:02.942 [2024-10-05 08:46:39.362082] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:02.942 [2024-10-05 08:46:39.362120] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:02.942 [2024-10-05 08:46:39.362376] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:02.942 [2024-10-05 08:46:39.362562] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:02.942 [2024-10-05 08:46:39.362600] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:02.943 [2024-10-05 08:46:39.362782] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:02.943 pt3 00:10:02.943 08:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.943 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:02.943 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:02.943 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:02.943 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:02.943 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:02.943 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.943 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.943 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.943 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.943 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:10:02.943 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.943 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.943 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.943 08:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.943 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:02.943 08:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.943 08:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.203 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.203 "name": "raid_bdev1", 00:10:03.203 "uuid": "7ed0c221-6c09-4835-b492-24761674d28d", 00:10:03.203 "strip_size_kb": 0, 00:10:03.203 "state": "online", 00:10:03.203 "raid_level": "raid1", 00:10:03.203 "superblock": true, 00:10:03.203 "num_base_bdevs": 3, 00:10:03.203 "num_base_bdevs_discovered": 3, 00:10:03.203 "num_base_bdevs_operational": 3, 00:10:03.203 "base_bdevs_list": [ 00:10:03.203 { 00:10:03.203 "name": "pt1", 00:10:03.203 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:03.203 "is_configured": true, 00:10:03.203 "data_offset": 2048, 00:10:03.203 "data_size": 63488 00:10:03.203 }, 00:10:03.203 { 00:10:03.203 "name": "pt2", 00:10:03.203 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:03.203 "is_configured": true, 00:10:03.203 "data_offset": 2048, 00:10:03.203 "data_size": 63488 00:10:03.203 }, 00:10:03.203 { 00:10:03.203 "name": "pt3", 00:10:03.203 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:03.203 "is_configured": true, 00:10:03.203 "data_offset": 2048, 00:10:03.203 "data_size": 63488 00:10:03.203 } 00:10:03.203 ] 00:10:03.203 }' 00:10:03.203 08:46:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.203 08:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.463 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:03.463 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:03.463 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:03.463 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:03.463 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:03.463 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:03.463 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:03.463 08:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.463 08:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.463 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:03.463 [2024-10-05 08:46:39.828757] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:03.463 08:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.463 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:03.463 "name": "raid_bdev1", 00:10:03.463 "aliases": [ 00:10:03.463 "7ed0c221-6c09-4835-b492-24761674d28d" 00:10:03.463 ], 00:10:03.463 "product_name": "Raid Volume", 00:10:03.463 "block_size": 512, 00:10:03.463 "num_blocks": 63488, 00:10:03.463 "uuid": "7ed0c221-6c09-4835-b492-24761674d28d", 00:10:03.463 "assigned_rate_limits": { 00:10:03.463 "rw_ios_per_sec": 0, 00:10:03.463 "rw_mbytes_per_sec": 0, 00:10:03.463 "r_mbytes_per_sec": 0, 00:10:03.463 
"w_mbytes_per_sec": 0 00:10:03.463 }, 00:10:03.463 "claimed": false, 00:10:03.463 "zoned": false, 00:10:03.463 "supported_io_types": { 00:10:03.463 "read": true, 00:10:03.463 "write": true, 00:10:03.463 "unmap": false, 00:10:03.463 "flush": false, 00:10:03.463 "reset": true, 00:10:03.463 "nvme_admin": false, 00:10:03.463 "nvme_io": false, 00:10:03.463 "nvme_io_md": false, 00:10:03.463 "write_zeroes": true, 00:10:03.463 "zcopy": false, 00:10:03.463 "get_zone_info": false, 00:10:03.463 "zone_management": false, 00:10:03.463 "zone_append": false, 00:10:03.463 "compare": false, 00:10:03.463 "compare_and_write": false, 00:10:03.463 "abort": false, 00:10:03.463 "seek_hole": false, 00:10:03.463 "seek_data": false, 00:10:03.463 "copy": false, 00:10:03.463 "nvme_iov_md": false 00:10:03.463 }, 00:10:03.463 "memory_domains": [ 00:10:03.463 { 00:10:03.463 "dma_device_id": "system", 00:10:03.463 "dma_device_type": 1 00:10:03.463 }, 00:10:03.463 { 00:10:03.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.463 "dma_device_type": 2 00:10:03.463 }, 00:10:03.463 { 00:10:03.463 "dma_device_id": "system", 00:10:03.463 "dma_device_type": 1 00:10:03.463 }, 00:10:03.463 { 00:10:03.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.463 "dma_device_type": 2 00:10:03.463 }, 00:10:03.463 { 00:10:03.463 "dma_device_id": "system", 00:10:03.463 "dma_device_type": 1 00:10:03.463 }, 00:10:03.463 { 00:10:03.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.463 "dma_device_type": 2 00:10:03.463 } 00:10:03.463 ], 00:10:03.463 "driver_specific": { 00:10:03.463 "raid": { 00:10:03.463 "uuid": "7ed0c221-6c09-4835-b492-24761674d28d", 00:10:03.463 "strip_size_kb": 0, 00:10:03.463 "state": "online", 00:10:03.463 "raid_level": "raid1", 00:10:03.463 "superblock": true, 00:10:03.463 "num_base_bdevs": 3, 00:10:03.463 "num_base_bdevs_discovered": 3, 00:10:03.463 "num_base_bdevs_operational": 3, 00:10:03.463 "base_bdevs_list": [ 00:10:03.463 { 00:10:03.463 "name": "pt1", 00:10:03.463 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:10:03.463 "is_configured": true, 00:10:03.463 "data_offset": 2048, 00:10:03.463 "data_size": 63488 00:10:03.463 }, 00:10:03.463 { 00:10:03.463 "name": "pt2", 00:10:03.463 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:03.463 "is_configured": true, 00:10:03.463 "data_offset": 2048, 00:10:03.463 "data_size": 63488 00:10:03.463 }, 00:10:03.463 { 00:10:03.463 "name": "pt3", 00:10:03.463 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:03.463 "is_configured": true, 00:10:03.463 "data_offset": 2048, 00:10:03.463 "data_size": 63488 00:10:03.463 } 00:10:03.463 ] 00:10:03.463 } 00:10:03.463 } 00:10:03.463 }' 00:10:03.463 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:03.463 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:03.463 pt2 00:10:03.463 pt3' 00:10:03.463 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.723 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:03.723 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.723 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.723 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:03.723 08:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.723 08:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.723 08:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.723 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:10:03.723 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.723 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.723 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:03.723 08:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.723 08:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.723 08:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.723 08:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.723 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.723 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.723 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.723 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:03.723 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.723 08:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.723 08:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.723 08:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.723 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.723 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.723 08:46:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:03.723 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:03.723 08:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.723 08:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.723 [2024-10-05 08:46:40.092251] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:03.723 08:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.723 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7ed0c221-6c09-4835-b492-24761674d28d '!=' 7ed0c221-6c09-4835-b492-24761674d28d ']' 00:10:03.723 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:03.723 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:03.723 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:03.723 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:03.723 08:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.723 08:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.723 [2024-10-05 08:46:40.139964] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:03.723 08:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.723 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:03.723 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:03.723 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:03.723 08:46:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.723 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:03.723 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:03.723 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.723 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.723 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.723 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.723 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.723 08:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.723 08:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.723 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:03.723 08:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.982 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.982 "name": "raid_bdev1", 00:10:03.982 "uuid": "7ed0c221-6c09-4835-b492-24761674d28d", 00:10:03.982 "strip_size_kb": 0, 00:10:03.982 "state": "online", 00:10:03.982 "raid_level": "raid1", 00:10:03.982 "superblock": true, 00:10:03.982 "num_base_bdevs": 3, 00:10:03.982 "num_base_bdevs_discovered": 2, 00:10:03.982 "num_base_bdevs_operational": 2, 00:10:03.982 "base_bdevs_list": [ 00:10:03.982 { 00:10:03.982 "name": null, 00:10:03.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.982 "is_configured": false, 00:10:03.982 "data_offset": 0, 00:10:03.982 "data_size": 63488 00:10:03.982 }, 00:10:03.982 { 00:10:03.982 "name": "pt2", 00:10:03.982 
"uuid": "00000000-0000-0000-0000-000000000002", 00:10:03.982 "is_configured": true, 00:10:03.983 "data_offset": 2048, 00:10:03.983 "data_size": 63488 00:10:03.983 }, 00:10:03.983 { 00:10:03.983 "name": "pt3", 00:10:03.983 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:03.983 "is_configured": true, 00:10:03.983 "data_offset": 2048, 00:10:03.983 "data_size": 63488 00:10:03.983 } 00:10:03.983 ] 00:10:03.983 }' 00:10:03.983 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.983 08:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.243 [2024-10-05 08:46:40.527280] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:04.243 [2024-10-05 08:46:40.527390] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:04.243 [2024-10-05 08:46:40.527496] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:04.243 [2024-10-05 08:46:40.527578] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:04.243 [2024-10-05 08:46:40.527640] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:04.243 08:46:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.243 [2024-10-05 08:46:40.595115] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:04.243 [2024-10-05 08:46:40.595168] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.243 [2024-10-05 08:46:40.595185] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:04.243 [2024-10-05 08:46:40.595196] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.243 [2024-10-05 08:46:40.597718] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.243 [2024-10-05 08:46:40.597794] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:04.243 [2024-10-05 08:46:40.597882] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:04.243 [2024-10-05 08:46:40.597939] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:04.243 pt2 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.243 "name": "raid_bdev1", 00:10:04.243 "uuid": "7ed0c221-6c09-4835-b492-24761674d28d", 00:10:04.243 "strip_size_kb": 0, 00:10:04.243 "state": "configuring", 00:10:04.243 "raid_level": "raid1", 00:10:04.243 "superblock": true, 00:10:04.243 "num_base_bdevs": 3, 00:10:04.243 "num_base_bdevs_discovered": 1, 00:10:04.243 "num_base_bdevs_operational": 2, 00:10:04.243 "base_bdevs_list": [ 00:10:04.243 { 00:10:04.243 "name": null, 00:10:04.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.243 "is_configured": false, 00:10:04.243 "data_offset": 2048, 00:10:04.243 "data_size": 63488 00:10:04.243 }, 00:10:04.243 { 00:10:04.243 "name": "pt2", 
00:10:04.243 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:04.243 "is_configured": true, 00:10:04.243 "data_offset": 2048, 00:10:04.243 "data_size": 63488 00:10:04.243 }, 00:10:04.243 { 00:10:04.243 "name": null, 00:10:04.243 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:04.243 "is_configured": false, 00:10:04.243 "data_offset": 2048, 00:10:04.243 "data_size": 63488 00:10:04.243 } 00:10:04.243 ] 00:10:04.243 }' 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.243 08:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.812 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:04.812 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:04.812 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:10:04.812 08:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:04.812 08:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.812 08:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.812 [2024-10-05 08:46:40.998450] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:04.812 [2024-10-05 08:46:40.998562] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.813 [2024-10-05 08:46:40.998599] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:04.813 [2024-10-05 08:46:40.998632] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.813 [2024-10-05 08:46:40.999149] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.813 [2024-10-05 08:46:40.999212] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt3 00:10:04.813 [2024-10-05 08:46:40.999321] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:04.813 [2024-10-05 08:46:40.999380] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:04.813 [2024-10-05 08:46:40.999535] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:04.813 [2024-10-05 08:46:40.999573] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:04.813 [2024-10-05 08:46:40.999849] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:04.813 [2024-10-05 08:46:41.000054] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:04.813 [2024-10-05 08:46:41.000094] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:04.813 [2024-10-05 08:46:41.000270] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.813 pt3 00:10:04.813 08:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.813 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:04.813 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:04.813 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:04.813 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:04.813 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:04.813 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:04.813 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.813 08:46:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.813 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.813 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.813 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.813 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:04.813 08:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.813 08:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.813 08:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.813 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.813 "name": "raid_bdev1", 00:10:04.813 "uuid": "7ed0c221-6c09-4835-b492-24761674d28d", 00:10:04.813 "strip_size_kb": 0, 00:10:04.813 "state": "online", 00:10:04.813 "raid_level": "raid1", 00:10:04.813 "superblock": true, 00:10:04.813 "num_base_bdevs": 3, 00:10:04.813 "num_base_bdevs_discovered": 2, 00:10:04.813 "num_base_bdevs_operational": 2, 00:10:04.813 "base_bdevs_list": [ 00:10:04.813 { 00:10:04.813 "name": null, 00:10:04.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.813 "is_configured": false, 00:10:04.813 "data_offset": 2048, 00:10:04.813 "data_size": 63488 00:10:04.813 }, 00:10:04.813 { 00:10:04.813 "name": "pt2", 00:10:04.813 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:04.813 "is_configured": true, 00:10:04.813 "data_offset": 2048, 00:10:04.813 "data_size": 63488 00:10:04.813 }, 00:10:04.813 { 00:10:04.813 "name": "pt3", 00:10:04.813 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:04.813 "is_configured": true, 00:10:04.813 "data_offset": 2048, 00:10:04.813 "data_size": 63488 00:10:04.813 } 
00:10:04.813 ] 00:10:04.813 }' 00:10:04.813 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.813 08:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.073 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:05.073 08:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.073 08:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.073 [2024-10-05 08:46:41.433702] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:05.073 [2024-10-05 08:46:41.433738] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:05.073 [2024-10-05 08:46:41.433820] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:05.073 [2024-10-05 08:46:41.433886] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:05.073 [2024-10-05 08:46:41.433896] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:05.073 08:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.073 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.073 08:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.073 08:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.073 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:05.073 08:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.073 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:05.073 08:46:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:05.073 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:10:05.073 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:10:05.073 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:10:05.073 08:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.073 08:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.073 08:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.073 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:05.073 08:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.073 08:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.073 [2024-10-05 08:46:41.489595] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:05.073 [2024-10-05 08:46:41.489653] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:05.073 [2024-10-05 08:46:41.489675] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:05.073 [2024-10-05 08:46:41.489685] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:05.073 [2024-10-05 08:46:41.492216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:05.073 [2024-10-05 08:46:41.492248] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:05.073 [2024-10-05 08:46:41.492323] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:05.073 [2024-10-05 08:46:41.492373] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:10:05.073 [2024-10-05 08:46:41.492489] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:05.073 [2024-10-05 08:46:41.492501] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:05.073 [2024-10-05 08:46:41.492518] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:10:05.073 [2024-10-05 08:46:41.492572] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:05.073 pt1 00:10:05.073 08:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.073 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:10:05.073 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:05.073 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:05.073 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.073 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.073 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.073 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:05.073 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.073 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.073 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.073 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.073 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:05.073 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:05.073 08:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.073 08:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.073 08:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.333 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.333 "name": "raid_bdev1", 00:10:05.333 "uuid": "7ed0c221-6c09-4835-b492-24761674d28d", 00:10:05.333 "strip_size_kb": 0, 00:10:05.333 "state": "configuring", 00:10:05.333 "raid_level": "raid1", 00:10:05.333 "superblock": true, 00:10:05.333 "num_base_bdevs": 3, 00:10:05.333 "num_base_bdevs_discovered": 1, 00:10:05.333 "num_base_bdevs_operational": 2, 00:10:05.333 "base_bdevs_list": [ 00:10:05.333 { 00:10:05.333 "name": null, 00:10:05.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.333 "is_configured": false, 00:10:05.333 "data_offset": 2048, 00:10:05.333 "data_size": 63488 00:10:05.333 }, 00:10:05.333 { 00:10:05.333 "name": "pt2", 00:10:05.333 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:05.333 "is_configured": true, 00:10:05.333 "data_offset": 2048, 00:10:05.333 "data_size": 63488 00:10:05.333 }, 00:10:05.333 { 00:10:05.333 "name": null, 00:10:05.333 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:05.333 "is_configured": false, 00:10:05.333 "data_offset": 2048, 00:10:05.333 "data_size": 63488 00:10:05.333 } 00:10:05.333 ] 00:10:05.333 }' 00:10:05.333 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.333 08:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.595 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:05.595 08:46:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.595 08:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.595 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:05.595 08:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.595 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:10:05.595 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:05.595 08:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.595 08:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.595 [2024-10-05 08:46:41.980827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:05.595 [2024-10-05 08:46:41.980929] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:05.595 [2024-10-05 08:46:41.980977] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:05.595 [2024-10-05 08:46:41.981009] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:05.595 [2024-10-05 08:46:41.981492] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:05.595 [2024-10-05 08:46:41.981552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:05.595 [2024-10-05 08:46:41.981659] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:05.595 [2024-10-05 08:46:41.981736] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:05.595 [2024-10-05 08:46:41.981912] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 
00:10:05.595 [2024-10-05 08:46:41.981947] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:05.595 [2024-10-05 08:46:41.982272] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:05.595 [2024-10-05 08:46:41.982465] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:10:05.595 [2024-10-05 08:46:41.982511] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:10:05.595 [2024-10-05 08:46:41.982696] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:05.595 pt3 00:10:05.595 08:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.595 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:05.595 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:05.595 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:05.595 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.595 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.595 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:05.595 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.595 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.595 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.595 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.595 08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.595 
08:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:05.595 08:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.595 08:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.595 08:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.595 08:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.595 "name": "raid_bdev1", 00:10:05.595 "uuid": "7ed0c221-6c09-4835-b492-24761674d28d", 00:10:05.595 "strip_size_kb": 0, 00:10:05.595 "state": "online", 00:10:05.595 "raid_level": "raid1", 00:10:05.595 "superblock": true, 00:10:05.595 "num_base_bdevs": 3, 00:10:05.595 "num_base_bdevs_discovered": 2, 00:10:05.595 "num_base_bdevs_operational": 2, 00:10:05.595 "base_bdevs_list": [ 00:10:05.595 { 00:10:05.595 "name": null, 00:10:05.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.595 "is_configured": false, 00:10:05.595 "data_offset": 2048, 00:10:05.595 "data_size": 63488 00:10:05.595 }, 00:10:05.595 { 00:10:05.595 "name": "pt2", 00:10:05.595 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:05.595 "is_configured": true, 00:10:05.595 "data_offset": 2048, 00:10:05.595 "data_size": 63488 00:10:05.595 }, 00:10:05.595 { 00:10:05.595 "name": "pt3", 00:10:05.595 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:05.595 "is_configured": true, 00:10:05.595 "data_offset": 2048, 00:10:05.595 "data_size": 63488 00:10:05.595 } 00:10:05.595 ] 00:10:05.595 }' 00:10:05.595 08:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.595 08:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.166 08:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:06.166 08:46:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.166 08:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.166 08:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:06.166 08:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.166 08:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:06.166 08:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:06.166 08:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:06.166 08:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.166 08:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.166 [2024-10-05 08:46:42.480254] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:06.166 08:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.166 08:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 7ed0c221-6c09-4835-b492-24761674d28d '!=' 7ed0c221-6c09-4835-b492-24761674d28d ']' 00:10:06.166 08:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67592 00:10:06.167 08:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 67592 ']' 00:10:06.167 08:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 67592 00:10:06.167 08:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:10:06.167 08:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:06.167 08:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67592 00:10:06.167 08:46:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:06.167 08:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:06.167 08:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67592' 00:10:06.167 killing process with pid 67592 00:10:06.167 08:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 67592 00:10:06.167 [2024-10-05 08:46:42.561376] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:06.167 [2024-10-05 08:46:42.561470] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:06.167 [2024-10-05 08:46:42.561534] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:06.167 [2024-10-05 08:46:42.561546] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:10:06.167 08:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 67592 00:10:06.552 [2024-10-05 08:46:42.879703] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:07.932 08:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:07.932 00:10:07.932 real 0m7.797s 00:10:07.932 user 0m11.923s 00:10:07.932 sys 0m1.451s 00:10:07.932 08:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:07.932 ************************************ 00:10:07.932 END TEST raid_superblock_test 00:10:07.932 ************************************ 00:10:07.932 08:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.932 08:46:44 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:10:07.932 08:46:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:07.932 08:46:44 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:10:07.932 08:46:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:07.932 ************************************ 00:10:07.932 START TEST raid_read_error_test 00:10:07.932 ************************************ 00:10:07.932 08:46:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:10:07.932 08:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:07.932 08:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:07.932 08:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:07.932 08:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:07.932 08:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:07.932 08:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:07.932 08:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:07.932 08:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:07.932 08:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:07.932 08:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:07.932 08:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:07.932 08:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:07.932 08:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:07.932 08:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:07.932 08:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:07.932 08:46:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:07.932 08:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:07.932 08:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:07.932 08:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:07.932 08:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:07.932 08:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:07.932 08:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:07.932 08:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:07.932 08:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:07.932 08:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.J6YzesErIY 00:10:07.932 08:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67990 00:10:07.932 08:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:07.932 08:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67990 00:10:07.932 08:46:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 67990 ']' 00:10:07.932 08:46:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.932 08:46:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:07.932 08:46:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:07.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.932 08:46:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:07.932 08:46:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.932 [2024-10-05 08:46:44.389893] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:10:07.932 [2024-10-05 08:46:44.390110] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67990 ] 00:10:08.191 [2024-10-05 08:46:44.555252] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.449 [2024-10-05 08:46:44.801525] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.710 [2024-10-05 08:46:45.030255] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:08.710 [2024-10-05 08:46:45.030289] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:08.970 08:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:08.970 08:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:08.970 08:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:08.970 08:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:08.970 08:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.970 08:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.970 BaseBdev1_malloc 00:10:08.970 08:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.970 08:46:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:08.970 08:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.970 08:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.970 true 00:10:08.970 08:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.970 08:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:08.970 08:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.970 08:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.970 [2024-10-05 08:46:45.275535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:08.970 [2024-10-05 08:46:45.275599] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:08.970 [2024-10-05 08:46:45.275616] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:08.970 [2024-10-05 08:46:45.275627] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:08.970 [2024-10-05 08:46:45.277880] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:08.970 [2024-10-05 08:46:45.278032] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:08.970 BaseBdev1 00:10:08.970 08:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.970 08:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:08.970 08:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:08.970 08:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:08.970 08:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.970 BaseBdev2_malloc 00:10:08.970 08:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.970 08:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:08.970 08:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.970 08:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.970 true 00:10:08.970 08:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.970 08:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:08.970 08:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.970 08:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.970 [2024-10-05 08:46:45.375407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:08.970 [2024-10-05 08:46:45.375525] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:08.970 [2024-10-05 08:46:45.375544] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:08.970 [2024-10-05 08:46:45.375555] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:08.970 [2024-10-05 08:46:45.377785] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:08.970 [2024-10-05 08:46:45.377825] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:08.970 BaseBdev2 00:10:08.970 08:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.970 08:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # 
for bdev in "${base_bdevs[@]}" 00:10:08.970 08:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:08.970 08:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.970 08:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.970 BaseBdev3_malloc 00:10:08.970 08:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.970 08:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:08.971 08:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.971 08:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.230 true 00:10:09.230 08:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.230 08:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:09.230 08:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.230 08:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.230 [2024-10-05 08:46:45.449544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:09.230 [2024-10-05 08:46:45.449664] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:09.230 [2024-10-05 08:46:45.449700] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:09.230 [2024-10-05 08:46:45.449711] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:09.230 [2024-10-05 08:46:45.452024] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:09.230 [2024-10-05 08:46:45.452061] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:09.230 BaseBdev3 00:10:09.230 08:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.230 08:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:09.230 08:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.230 08:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.230 [2024-10-05 08:46:45.461600] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:09.230 [2024-10-05 08:46:45.463554] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:09.230 [2024-10-05 08:46:45.463629] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:09.230 [2024-10-05 08:46:45.463825] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:09.230 [2024-10-05 08:46:45.463838] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:09.230 [2024-10-05 08:46:45.464082] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:09.230 [2024-10-05 08:46:45.464249] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:09.230 [2024-10-05 08:46:45.464268] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:09.230 [2024-10-05 08:46:45.464400] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:09.230 08:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.230 08:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:09.231 08:46:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:09.231 08:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:09.231 08:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.231 08:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.231 08:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.231 08:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.231 08:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.231 08:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.231 08:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.231 08:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.231 08:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.231 08:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:09.231 08:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.231 08:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.231 08:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.231 "name": "raid_bdev1", 00:10:09.231 "uuid": "c649d6a9-6bf2-4806-ac05-96aedc015c5e", 00:10:09.231 "strip_size_kb": 0, 00:10:09.231 "state": "online", 00:10:09.231 "raid_level": "raid1", 00:10:09.231 "superblock": true, 00:10:09.231 "num_base_bdevs": 3, 00:10:09.231 "num_base_bdevs_discovered": 3, 00:10:09.231 "num_base_bdevs_operational": 3, 00:10:09.231 "base_bdevs_list": [ 00:10:09.231 { 
00:10:09.231 "name": "BaseBdev1", 00:10:09.231 "uuid": "5288ca91-19de-5d27-967c-5bc958ad721b", 00:10:09.231 "is_configured": true, 00:10:09.231 "data_offset": 2048, 00:10:09.231 "data_size": 63488 00:10:09.231 }, 00:10:09.231 { 00:10:09.231 "name": "BaseBdev2", 00:10:09.231 "uuid": "ab397e1b-c911-50ce-8ed3-7cdebca30453", 00:10:09.231 "is_configured": true, 00:10:09.231 "data_offset": 2048, 00:10:09.231 "data_size": 63488 00:10:09.231 }, 00:10:09.231 { 00:10:09.231 "name": "BaseBdev3", 00:10:09.231 "uuid": "6c27340e-95b3-53b9-8d48-3c208916a8fc", 00:10:09.231 "is_configured": true, 00:10:09.231 "data_offset": 2048, 00:10:09.231 "data_size": 63488 00:10:09.231 } 00:10:09.231 ] 00:10:09.231 }' 00:10:09.231 08:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.231 08:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.490 08:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:09.490 08:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:09.749 [2024-10-05 08:46:45.986055] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:10.689 08:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:10.689 08:46:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.689 08:46:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.689 08:46:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.689 08:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:10.689 08:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:10.689 08:46:46 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:10.689 08:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:10.689 08:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:10.689 08:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:10.689 08:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:10.689 08:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.689 08:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.689 08:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.689 08:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.689 08:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.689 08:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.689 08:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.689 08:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.689 08:46:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.689 08:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:10.689 08:46:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.689 08:46:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.689 08:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.689 "name": "raid_bdev1", 00:10:10.689 "uuid": 
"c649d6a9-6bf2-4806-ac05-96aedc015c5e", 00:10:10.689 "strip_size_kb": 0, 00:10:10.689 "state": "online", 00:10:10.689 "raid_level": "raid1", 00:10:10.689 "superblock": true, 00:10:10.689 "num_base_bdevs": 3, 00:10:10.689 "num_base_bdevs_discovered": 3, 00:10:10.689 "num_base_bdevs_operational": 3, 00:10:10.689 "base_bdevs_list": [ 00:10:10.689 { 00:10:10.689 "name": "BaseBdev1", 00:10:10.689 "uuid": "5288ca91-19de-5d27-967c-5bc958ad721b", 00:10:10.689 "is_configured": true, 00:10:10.689 "data_offset": 2048, 00:10:10.689 "data_size": 63488 00:10:10.689 }, 00:10:10.689 { 00:10:10.690 "name": "BaseBdev2", 00:10:10.690 "uuid": "ab397e1b-c911-50ce-8ed3-7cdebca30453", 00:10:10.690 "is_configured": true, 00:10:10.690 "data_offset": 2048, 00:10:10.690 "data_size": 63488 00:10:10.690 }, 00:10:10.690 { 00:10:10.690 "name": "BaseBdev3", 00:10:10.690 "uuid": "6c27340e-95b3-53b9-8d48-3c208916a8fc", 00:10:10.690 "is_configured": true, 00:10:10.690 "data_offset": 2048, 00:10:10.690 "data_size": 63488 00:10:10.690 } 00:10:10.690 ] 00:10:10.690 }' 00:10:10.690 08:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.690 08:46:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.950 08:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:10.950 08:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.950 08:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.950 [2024-10-05 08:46:47.366575] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:10.950 [2024-10-05 08:46:47.366616] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:10.950 [2024-10-05 08:46:47.369159] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:10.950 [2024-10-05 08:46:47.369296] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.950 [2024-10-05 08:46:47.369414] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:10.950 [2024-10-05 08:46:47.369438] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:10.950 { 00:10:10.950 "results": [ 00:10:10.950 { 00:10:10.950 "job": "raid_bdev1", 00:10:10.950 "core_mask": "0x1", 00:10:10.950 "workload": "randrw", 00:10:10.950 "percentage": 50, 00:10:10.950 "status": "finished", 00:10:10.950 "queue_depth": 1, 00:10:10.950 "io_size": 131072, 00:10:10.950 "runtime": 1.381148, 00:10:10.950 "iops": 10459.414921500085, 00:10:10.950 "mibps": 1307.4268651875107, 00:10:10.950 "io_failed": 0, 00:10:10.950 "io_timeout": 0, 00:10:10.950 "avg_latency_us": 93.14811455642366, 00:10:10.950 "min_latency_us": 21.575545851528386, 00:10:10.950 "max_latency_us": 1445.2262008733624 00:10:10.950 } 00:10:10.950 ], 00:10:10.950 "core_count": 1 00:10:10.950 } 00:10:10.950 08:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.950 08:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67990 00:10:10.950 08:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 67990 ']' 00:10:10.950 08:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 67990 00:10:10.950 08:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:10.950 08:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:10.950 08:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67990 00:10:10.950 08:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:10.950 08:46:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:10.950 killing process with pid 67990 00:10:10.950 08:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67990' 00:10:10.950 08:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 67990 00:10:10.950 [2024-10-05 08:46:47.407671] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:10.950 08:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 67990 00:10:11.210 [2024-10-05 08:46:47.660650] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:12.588 08:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.J6YzesErIY 00:10:12.588 08:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:12.588 08:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:12.848 08:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:12.848 08:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:12.848 08:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:12.848 08:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:12.848 ************************************ 00:10:12.848 END TEST raid_read_error_test 00:10:12.848 ************************************ 00:10:12.848 08:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:12.848 00:10:12.848 real 0m4.783s 00:10:12.848 user 0m5.490s 00:10:12.848 sys 0m0.677s 00:10:12.848 08:46:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:12.848 08:46:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.848 08:46:49 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test 
raid_io_error_test raid1 3 write 00:10:12.848 08:46:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:12.848 08:46:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:12.848 08:46:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:12.848 ************************************ 00:10:12.848 START TEST raid_write_error_test 00:10:12.848 ************************************ 00:10:12.848 08:46:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:10:12.848 08:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:12.848 08:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:12.848 08:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:12.848 08:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:12.848 08:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:12.848 08:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:12.848 08:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:12.848 08:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:12.848 08:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:12.848 08:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:12.848 08:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:12.848 08:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:12.848 08:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:12.848 08:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 
00:10:12.848 08:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:12.848 08:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:12.848 08:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:12.848 08:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:12.848 08:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:12.848 08:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:12.848 08:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:12.848 08:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:12.848 08:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:12.848 08:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:12.848 08:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.tqWQdgUz3T 00:10:12.848 08:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=68106 00:10:12.848 08:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:12.848 08:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 68106 00:10:12.848 08:46:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 68106 ']' 00:10:12.848 08:46:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.848 08:46:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:12.848 08:46:49 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.848 08:46:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:12.848 08:46:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.848 [2024-10-05 08:46:49.247016] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:10:12.848 [2024-10-05 08:46:49.247202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68106 ] 00:10:13.108 [2024-10-05 08:46:49.396951] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.368 [2024-10-05 08:46:49.647667] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.627 [2024-10-05 08:46:49.880455] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:13.627 [2024-10-05 08:46:49.880587] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:13.627 08:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:13.627 08:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:13.627 08:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:13.627 08:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:13.627 08:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.627 08:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.886 
BaseBdev1_malloc 00:10:13.886 08:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.886 08:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:13.886 08:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.886 08:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.886 true 00:10:13.886 08:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.886 08:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:13.886 08:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.886 08:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.886 [2024-10-05 08:46:50.136186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:13.886 [2024-10-05 08:46:50.136246] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.886 [2024-10-05 08:46:50.136264] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:13.886 [2024-10-05 08:46:50.136275] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.886 [2024-10-05 08:46:50.138663] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.886 [2024-10-05 08:46:50.138699] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:13.886 BaseBdev1 00:10:13.886 08:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.886 08:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:13.886 08:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # 
rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:13.886 08:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.886 08:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.886 BaseBdev2_malloc 00:10:13.886 08:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.886 08:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:13.886 08:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.886 08:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.886 true 00:10:13.886 08:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.886 08:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:13.886 08:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.886 08:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.886 [2024-10-05 08:46:50.229239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:13.886 [2024-10-05 08:46:50.229354] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.887 [2024-10-05 08:46:50.229376] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:13.887 [2024-10-05 08:46:50.229388] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.887 [2024-10-05 08:46:50.231754] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.887 [2024-10-05 08:46:50.231794] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:13.887 BaseBdev2 00:10:13.887 08:46:50 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.887 08:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:13.887 08:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:13.887 08:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.887 08:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.887 BaseBdev3_malloc 00:10:13.887 08:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.887 08:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:13.887 08:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.887 08:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.887 true 00:10:13.887 08:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.887 08:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:13.887 08:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.887 08:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.887 [2024-10-05 08:46:50.290785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:13.887 [2024-10-05 08:46:50.290901] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.887 [2024-10-05 08:46:50.290921] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:13.887 [2024-10-05 08:46:50.290932] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.887 
[2024-10-05 08:46:50.293296] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.887 [2024-10-05 08:46:50.293336] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:13.887 BaseBdev3 00:10:13.887 08:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.887 08:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:13.887 08:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.887 08:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.887 [2024-10-05 08:46:50.298840] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:13.887 [2024-10-05 08:46:50.300903] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:13.887 [2024-10-05 08:46:50.300995] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:13.887 [2024-10-05 08:46:50.301201] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:13.887 [2024-10-05 08:46:50.301263] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:13.887 [2024-10-05 08:46:50.301512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:13.887 [2024-10-05 08:46:50.301688] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:13.887 [2024-10-05 08:46:50.301703] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:13.887 [2024-10-05 08:46:50.301853] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:13.887 08:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:10:13.887 08:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:13.887 08:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:13.887 08:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:13.887 08:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.887 08:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.887 08:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.887 08:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.887 08:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.887 08:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.887 08:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.887 08:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.887 08:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.887 08:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:13.887 08:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.887 08:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.887 08:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.887 "name": "raid_bdev1", 00:10:13.887 "uuid": "10cec75a-f8f1-4319-9564-3608fe482a79", 00:10:13.887 "strip_size_kb": 0, 00:10:13.887 "state": "online", 00:10:13.887 "raid_level": "raid1", 00:10:13.887 "superblock": 
true, 00:10:13.887 "num_base_bdevs": 3, 00:10:13.887 "num_base_bdevs_discovered": 3, 00:10:13.887 "num_base_bdevs_operational": 3, 00:10:13.887 "base_bdevs_list": [ 00:10:13.887 { 00:10:13.887 "name": "BaseBdev1", 00:10:13.887 "uuid": "f443b909-92cd-5498-b36c-9f6cc3b5ca02", 00:10:13.887 "is_configured": true, 00:10:13.887 "data_offset": 2048, 00:10:13.887 "data_size": 63488 00:10:13.887 }, 00:10:13.887 { 00:10:13.887 "name": "BaseBdev2", 00:10:13.887 "uuid": "22fb1419-acd0-5311-bf8e-33d6ce3c4cde", 00:10:13.887 "is_configured": true, 00:10:13.887 "data_offset": 2048, 00:10:13.887 "data_size": 63488 00:10:13.887 }, 00:10:13.887 { 00:10:13.887 "name": "BaseBdev3", 00:10:13.887 "uuid": "aaa66b5c-7e86-5a82-afb3-4235d2112a76", 00:10:13.887 "is_configured": true, 00:10:13.887 "data_offset": 2048, 00:10:13.887 "data_size": 63488 00:10:13.887 } 00:10:13.887 ] 00:10:13.887 }' 00:10:13.887 08:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.887 08:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.456 08:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:14.456 08:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:14.456 [2024-10-05 08:46:50.839210] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:15.394 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:15.394 08:46:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.394 08:46:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.394 [2024-10-05 08:46:51.758119] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:15.394 [2024-10-05 
08:46:51.758280] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:15.394 [2024-10-05 08:46:51.758543] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005fb0 00:10:15.394 08:46:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.394 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:15.394 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:15.394 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:15.394 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:15.394 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:15.394 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:15.394 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.394 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:15.394 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:15.394 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:15.394 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.394 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.394 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.394 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.394 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:10:15.394 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:15.394 08:46:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.394 08:46:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.394 08:46:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.394 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.394 "name": "raid_bdev1", 00:10:15.394 "uuid": "10cec75a-f8f1-4319-9564-3608fe482a79", 00:10:15.394 "strip_size_kb": 0, 00:10:15.394 "state": "online", 00:10:15.394 "raid_level": "raid1", 00:10:15.394 "superblock": true, 00:10:15.394 "num_base_bdevs": 3, 00:10:15.394 "num_base_bdevs_discovered": 2, 00:10:15.394 "num_base_bdevs_operational": 2, 00:10:15.394 "base_bdevs_list": [ 00:10:15.394 { 00:10:15.394 "name": null, 00:10:15.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.394 "is_configured": false, 00:10:15.394 "data_offset": 0, 00:10:15.394 "data_size": 63488 00:10:15.394 }, 00:10:15.394 { 00:10:15.394 "name": "BaseBdev2", 00:10:15.394 "uuid": "22fb1419-acd0-5311-bf8e-33d6ce3c4cde", 00:10:15.394 "is_configured": true, 00:10:15.394 "data_offset": 2048, 00:10:15.394 "data_size": 63488 00:10:15.394 }, 00:10:15.394 { 00:10:15.394 "name": "BaseBdev3", 00:10:15.394 "uuid": "aaa66b5c-7e86-5a82-afb3-4235d2112a76", 00:10:15.394 "is_configured": true, 00:10:15.394 "data_offset": 2048, 00:10:15.394 "data_size": 63488 00:10:15.394 } 00:10:15.394 ] 00:10:15.394 }' 00:10:15.394 08:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.394 08:46:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.964 08:46:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:15.964 08:46:52 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.964 08:46:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.964 [2024-10-05 08:46:52.241551] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:15.964 [2024-10-05 08:46:52.241677] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:15.964 [2024-10-05 08:46:52.244408] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:15.964 [2024-10-05 08:46:52.244507] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.964 [2024-10-05 08:46:52.244646] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:15.964 [2024-10-05 08:46:52.244697] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:15.964 { 00:10:15.964 "results": [ 00:10:15.964 { 00:10:15.964 "job": "raid_bdev1", 00:10:15.964 "core_mask": "0x1", 00:10:15.964 "workload": "randrw", 00:10:15.964 "percentage": 50, 00:10:15.964 "status": "finished", 00:10:15.964 "queue_depth": 1, 00:10:15.964 "io_size": 131072, 00:10:15.964 "runtime": 1.403032, 00:10:15.964 "iops": 11768.797860633256, 00:10:15.964 "mibps": 1471.099732579157, 00:10:15.964 "io_failed": 0, 00:10:15.964 "io_timeout": 0, 00:10:15.964 "avg_latency_us": 82.4056125385058, 00:10:15.964 "min_latency_us": 22.358078602620086, 00:10:15.964 "max_latency_us": 1480.9991266375546 00:10:15.964 } 00:10:15.964 ], 00:10:15.964 "core_count": 1 00:10:15.964 } 00:10:15.964 08:46:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.964 08:46:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 68106 00:10:15.964 08:46:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 68106 ']' 00:10:15.964 08:46:52 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 68106 00:10:15.964 08:46:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:10:15.964 08:46:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:15.964 08:46:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68106 00:10:15.964 08:46:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:15.964 killing process with pid 68106 00:10:15.964 08:46:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:15.964 08:46:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68106' 00:10:15.964 08:46:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 68106 00:10:15.964 [2024-10-05 08:46:52.291063] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:15.964 08:46:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 68106 00:10:16.224 [2024-10-05 08:46:52.537641] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:17.605 08:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:17.605 08:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.tqWQdgUz3T 00:10:17.605 08:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:17.605 08:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:17.605 08:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:17.605 08:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:17.605 08:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:17.605 08:46:53 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:17.605 ************************************ 00:10:17.605 END TEST raid_write_error_test 00:10:17.605 ************************************ 00:10:17.605 00:10:17.605 real 0m4.802s 00:10:17.605 user 0m5.523s 00:10:17.605 sys 0m0.705s 00:10:17.605 08:46:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:17.605 08:46:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.605 08:46:54 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:17.605 08:46:54 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:17.605 08:46:54 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:10:17.605 08:46:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:17.605 08:46:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:17.605 08:46:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:17.605 ************************************ 00:10:17.605 START TEST raid_state_function_test 00:10:17.605 ************************************ 00:10:17.605 08:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:10:17.605 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:17.605 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:17.605 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:17.605 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:17.605 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:17.605 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:10:17.605 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:17.605 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:17.605 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:17.605 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:17.605 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:17.605 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:17.605 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:17.605 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:17.605 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:17.605 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:17.605 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:17.605 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:17.605 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:17.605 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:17.605 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:17.605 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:17.605 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:17.605 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:17.605 08:46:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:17.605 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:17.605 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:17.605 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:17.605 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:17.605 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=68225 00:10:17.605 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:17.605 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68225' 00:10:17.605 Process raid pid: 68225 00:10:17.605 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 68225 00:10:17.605 08:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 68225 ']' 00:10:17.605 08:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.605 08:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:17.605 08:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:17.605 08:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:17.605 08:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.864 [2024-10-05 08:46:54.121197] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:10:17.864 [2024-10-05 08:46:54.121396] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:17.864 [2024-10-05 08:46:54.283598] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.122 [2024-10-05 08:46:54.541799] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.381 [2024-10-05 08:46:54.773308] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:18.381 [2024-10-05 08:46:54.773436] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:18.640 08:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:18.641 08:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:18.641 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:18.641 08:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.641 08:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.641 [2024-10-05 08:46:54.952125] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:18.641 [2024-10-05 08:46:54.952186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:18.641 [2024-10-05 08:46:54.952195] 
bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:18.641 [2024-10-05 08:46:54.952207] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:18.641 [2024-10-05 08:46:54.952213] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:18.641 [2024-10-05 08:46:54.952222] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:18.641 [2024-10-05 08:46:54.952227] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:18.641 [2024-10-05 08:46:54.952237] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:18.641 08:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.641 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:18.641 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.641 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.641 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:18.641 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.641 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.641 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.641 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.641 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.641 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:10:18.641 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.641 08:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.641 08:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.641 08:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.641 08:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.641 08:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.641 "name": "Existed_Raid", 00:10:18.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.641 "strip_size_kb": 64, 00:10:18.641 "state": "configuring", 00:10:18.641 "raid_level": "raid0", 00:10:18.641 "superblock": false, 00:10:18.641 "num_base_bdevs": 4, 00:10:18.641 "num_base_bdevs_discovered": 0, 00:10:18.641 "num_base_bdevs_operational": 4, 00:10:18.641 "base_bdevs_list": [ 00:10:18.641 { 00:10:18.641 "name": "BaseBdev1", 00:10:18.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.641 "is_configured": false, 00:10:18.641 "data_offset": 0, 00:10:18.641 "data_size": 0 00:10:18.641 }, 00:10:18.641 { 00:10:18.641 "name": "BaseBdev2", 00:10:18.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.641 "is_configured": false, 00:10:18.641 "data_offset": 0, 00:10:18.641 "data_size": 0 00:10:18.641 }, 00:10:18.641 { 00:10:18.641 "name": "BaseBdev3", 00:10:18.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.641 "is_configured": false, 00:10:18.641 "data_offset": 0, 00:10:18.641 "data_size": 0 00:10:18.641 }, 00:10:18.641 { 00:10:18.641 "name": "BaseBdev4", 00:10:18.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.641 "is_configured": false, 00:10:18.641 "data_offset": 0, 00:10:18.641 "data_size": 0 00:10:18.641 } 00:10:18.641 ] 00:10:18.641 
}' 00:10:18.641 08:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.641 08:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.900 08:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:18.900 08:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.900 08:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.900 [2024-10-05 08:46:55.371276] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:18.900 [2024-10-05 08:46:55.371361] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:19.160 08:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.160 08:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:19.160 08:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.160 08:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.160 [2024-10-05 08:46:55.379303] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:19.160 [2024-10-05 08:46:55.379382] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:19.160 [2024-10-05 08:46:55.379408] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:19.160 [2024-10-05 08:46:55.379431] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:19.160 [2024-10-05 08:46:55.379448] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:19.160 
[2024-10-05 08:46:55.379468] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:19.160 [2024-10-05 08:46:55.379485] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:19.160 [2024-10-05 08:46:55.379505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:19.160 08:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.160 08:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:19.160 08:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.160 08:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.160 [2024-10-05 08:46:55.461323] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:19.160 BaseBdev1 00:10:19.160 08:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.160 08:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:19.160 08:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:19.160 08:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:19.160 08:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:19.160 08:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:19.160 08:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:19.160 08:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:19.160 08:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:19.160 08:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.160 08:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.160 08:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:19.160 08:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.160 08:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.160 [ 00:10:19.160 { 00:10:19.160 "name": "BaseBdev1", 00:10:19.160 "aliases": [ 00:10:19.160 "26b17823-38bd-447e-bea9-c1ea8ee2246b" 00:10:19.160 ], 00:10:19.160 "product_name": "Malloc disk", 00:10:19.160 "block_size": 512, 00:10:19.160 "num_blocks": 65536, 00:10:19.160 "uuid": "26b17823-38bd-447e-bea9-c1ea8ee2246b", 00:10:19.160 "assigned_rate_limits": { 00:10:19.160 "rw_ios_per_sec": 0, 00:10:19.160 "rw_mbytes_per_sec": 0, 00:10:19.160 "r_mbytes_per_sec": 0, 00:10:19.160 "w_mbytes_per_sec": 0 00:10:19.160 }, 00:10:19.160 "claimed": true, 00:10:19.160 "claim_type": "exclusive_write", 00:10:19.160 "zoned": false, 00:10:19.160 "supported_io_types": { 00:10:19.160 "read": true, 00:10:19.160 "write": true, 00:10:19.160 "unmap": true, 00:10:19.160 "flush": true, 00:10:19.160 "reset": true, 00:10:19.160 "nvme_admin": false, 00:10:19.160 "nvme_io": false, 00:10:19.160 "nvme_io_md": false, 00:10:19.160 "write_zeroes": true, 00:10:19.160 "zcopy": true, 00:10:19.160 "get_zone_info": false, 00:10:19.160 "zone_management": false, 00:10:19.160 "zone_append": false, 00:10:19.160 "compare": false, 00:10:19.160 "compare_and_write": false, 00:10:19.160 "abort": true, 00:10:19.160 "seek_hole": false, 00:10:19.160 "seek_data": false, 00:10:19.160 "copy": true, 00:10:19.160 "nvme_iov_md": false 00:10:19.160 }, 00:10:19.160 "memory_domains": [ 00:10:19.160 { 00:10:19.160 "dma_device_id": "system", 00:10:19.160 
"dma_device_type": 1 00:10:19.160 }, 00:10:19.160 { 00:10:19.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.160 "dma_device_type": 2 00:10:19.160 } 00:10:19.160 ], 00:10:19.160 "driver_specific": {} 00:10:19.160 } 00:10:19.160 ] 00:10:19.160 08:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.160 08:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:19.160 08:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:19.160 08:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.160 08:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.160 08:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:19.160 08:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.160 08:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.160 08:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.160 08:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.160 08:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.160 08:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.160 08:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.160 08:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.160 08:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.160 08:46:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.160 08:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.160 08:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.160 "name": "Existed_Raid", 00:10:19.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.160 "strip_size_kb": 64, 00:10:19.160 "state": "configuring", 00:10:19.160 "raid_level": "raid0", 00:10:19.160 "superblock": false, 00:10:19.160 "num_base_bdevs": 4, 00:10:19.160 "num_base_bdevs_discovered": 1, 00:10:19.160 "num_base_bdevs_operational": 4, 00:10:19.160 "base_bdevs_list": [ 00:10:19.160 { 00:10:19.160 "name": "BaseBdev1", 00:10:19.160 "uuid": "26b17823-38bd-447e-bea9-c1ea8ee2246b", 00:10:19.160 "is_configured": true, 00:10:19.160 "data_offset": 0, 00:10:19.160 "data_size": 65536 00:10:19.160 }, 00:10:19.160 { 00:10:19.160 "name": "BaseBdev2", 00:10:19.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.160 "is_configured": false, 00:10:19.160 "data_offset": 0, 00:10:19.160 "data_size": 0 00:10:19.160 }, 00:10:19.160 { 00:10:19.160 "name": "BaseBdev3", 00:10:19.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.160 "is_configured": false, 00:10:19.161 "data_offset": 0, 00:10:19.161 "data_size": 0 00:10:19.161 }, 00:10:19.161 { 00:10:19.161 "name": "BaseBdev4", 00:10:19.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.161 "is_configured": false, 00:10:19.161 "data_offset": 0, 00:10:19.161 "data_size": 0 00:10:19.161 } 00:10:19.161 ] 00:10:19.161 }' 00:10:19.161 08:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.161 08:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.729 08:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:19.729 08:46:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.729 08:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.729 [2024-10-05 08:46:55.948553] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:19.729 [2024-10-05 08:46:55.948627] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:19.729 08:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.729 08:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:19.729 08:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.729 08:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.729 [2024-10-05 08:46:55.960556] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:19.729 [2024-10-05 08:46:55.962676] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:19.729 [2024-10-05 08:46:55.962775] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:19.729 [2024-10-05 08:46:55.962791] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:19.729 [2024-10-05 08:46:55.962802] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:19.729 [2024-10-05 08:46:55.962809] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:19.729 [2024-10-05 08:46:55.962817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:19.729 08:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.729 08:46:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:19.729 08:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:19.729 08:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:19.729 08:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.729 08:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.729 08:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:19.729 08:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.729 08:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.729 08:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.729 08:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.729 08:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.729 08:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.729 08:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.729 08:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.729 08:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.729 08:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.729 08:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.729 08:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:19.729 "name": "Existed_Raid", 00:10:19.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.729 "strip_size_kb": 64, 00:10:19.729 "state": "configuring", 00:10:19.729 "raid_level": "raid0", 00:10:19.729 "superblock": false, 00:10:19.729 "num_base_bdevs": 4, 00:10:19.729 "num_base_bdevs_discovered": 1, 00:10:19.729 "num_base_bdevs_operational": 4, 00:10:19.729 "base_bdevs_list": [ 00:10:19.729 { 00:10:19.729 "name": "BaseBdev1", 00:10:19.729 "uuid": "26b17823-38bd-447e-bea9-c1ea8ee2246b", 00:10:19.729 "is_configured": true, 00:10:19.729 "data_offset": 0, 00:10:19.729 "data_size": 65536 00:10:19.729 }, 00:10:19.729 { 00:10:19.729 "name": "BaseBdev2", 00:10:19.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.729 "is_configured": false, 00:10:19.729 "data_offset": 0, 00:10:19.729 "data_size": 0 00:10:19.729 }, 00:10:19.729 { 00:10:19.729 "name": "BaseBdev3", 00:10:19.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.729 "is_configured": false, 00:10:19.729 "data_offset": 0, 00:10:19.729 "data_size": 0 00:10:19.729 }, 00:10:19.729 { 00:10:19.729 "name": "BaseBdev4", 00:10:19.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.729 "is_configured": false, 00:10:19.729 "data_offset": 0, 00:10:19.729 "data_size": 0 00:10:19.729 } 00:10:19.729 ] 00:10:19.729 }' 00:10:19.729 08:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.729 08:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.989 08:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:19.989 08:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.989 08:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.989 [2024-10-05 08:46:56.443160] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev2 is claimed 00:10:19.989 BaseBdev2 00:10:19.989 08:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.989 08:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:19.989 08:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:19.989 08:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:19.989 08:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:19.989 08:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:19.989 08:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:19.989 08:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:19.989 08:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.989 08:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.989 08:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.989 08:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:19.989 08:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.989 08:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.251 [ 00:10:20.251 { 00:10:20.251 "name": "BaseBdev2", 00:10:20.251 "aliases": [ 00:10:20.251 "62b10078-5522-40b8-8aad-8e211537f071" 00:10:20.251 ], 00:10:20.251 "product_name": "Malloc disk", 00:10:20.251 "block_size": 512, 00:10:20.251 "num_blocks": 65536, 00:10:20.251 "uuid": "62b10078-5522-40b8-8aad-8e211537f071", 00:10:20.251 "assigned_rate_limits": { 00:10:20.251 
"rw_ios_per_sec": 0, 00:10:20.251 "rw_mbytes_per_sec": 0, 00:10:20.251 "r_mbytes_per_sec": 0, 00:10:20.251 "w_mbytes_per_sec": 0 00:10:20.251 }, 00:10:20.251 "claimed": true, 00:10:20.251 "claim_type": "exclusive_write", 00:10:20.251 "zoned": false, 00:10:20.251 "supported_io_types": { 00:10:20.251 "read": true, 00:10:20.251 "write": true, 00:10:20.251 "unmap": true, 00:10:20.251 "flush": true, 00:10:20.251 "reset": true, 00:10:20.251 "nvme_admin": false, 00:10:20.251 "nvme_io": false, 00:10:20.251 "nvme_io_md": false, 00:10:20.251 "write_zeroes": true, 00:10:20.251 "zcopy": true, 00:10:20.251 "get_zone_info": false, 00:10:20.251 "zone_management": false, 00:10:20.251 "zone_append": false, 00:10:20.251 "compare": false, 00:10:20.251 "compare_and_write": false, 00:10:20.251 "abort": true, 00:10:20.251 "seek_hole": false, 00:10:20.251 "seek_data": false, 00:10:20.251 "copy": true, 00:10:20.251 "nvme_iov_md": false 00:10:20.251 }, 00:10:20.251 "memory_domains": [ 00:10:20.251 { 00:10:20.251 "dma_device_id": "system", 00:10:20.251 "dma_device_type": 1 00:10:20.251 }, 00:10:20.251 { 00:10:20.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.251 "dma_device_type": 2 00:10:20.251 } 00:10:20.251 ], 00:10:20.251 "driver_specific": {} 00:10:20.251 } 00:10:20.251 ] 00:10:20.251 08:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.251 08:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:20.251 08:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:20.251 08:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:20.251 08:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:20.251 08:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.251 08:46:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.251 08:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.251 08:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.251 08:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:20.251 08:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.251 08:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.251 08:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.251 08:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.251 08:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.251 08:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.251 08:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.251 08:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.251 08:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.251 08:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.251 "name": "Existed_Raid", 00:10:20.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.251 "strip_size_kb": 64, 00:10:20.251 "state": "configuring", 00:10:20.251 "raid_level": "raid0", 00:10:20.251 "superblock": false, 00:10:20.251 "num_base_bdevs": 4, 00:10:20.251 "num_base_bdevs_discovered": 2, 00:10:20.251 "num_base_bdevs_operational": 4, 00:10:20.251 "base_bdevs_list": [ 00:10:20.251 { 00:10:20.251 "name": "BaseBdev1", 
00:10:20.251 "uuid": "26b17823-38bd-447e-bea9-c1ea8ee2246b", 00:10:20.251 "is_configured": true, 00:10:20.251 "data_offset": 0, 00:10:20.251 "data_size": 65536 00:10:20.251 }, 00:10:20.251 { 00:10:20.251 "name": "BaseBdev2", 00:10:20.251 "uuid": "62b10078-5522-40b8-8aad-8e211537f071", 00:10:20.251 "is_configured": true, 00:10:20.251 "data_offset": 0, 00:10:20.251 "data_size": 65536 00:10:20.251 }, 00:10:20.251 { 00:10:20.251 "name": "BaseBdev3", 00:10:20.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.251 "is_configured": false, 00:10:20.251 "data_offset": 0, 00:10:20.251 "data_size": 0 00:10:20.251 }, 00:10:20.251 { 00:10:20.251 "name": "BaseBdev4", 00:10:20.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.251 "is_configured": false, 00:10:20.251 "data_offset": 0, 00:10:20.251 "data_size": 0 00:10:20.251 } 00:10:20.251 ] 00:10:20.251 }' 00:10:20.251 08:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.251 08:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.515 08:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:20.515 08:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.515 08:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.515 [2024-10-05 08:46:56.912594] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:20.515 BaseBdev3 00:10:20.515 08:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.515 08:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:20.515 08:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:20.515 08:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 
-- # local bdev_timeout= 00:10:20.515 08:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:20.515 08:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:20.515 08:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:20.515 08:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:20.515 08:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.515 08:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.515 08:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.515 08:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:20.515 08:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.515 08:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.515 [ 00:10:20.515 { 00:10:20.515 "name": "BaseBdev3", 00:10:20.515 "aliases": [ 00:10:20.515 "816aa910-ad81-4f2e-9073-7f1ed1cc818c" 00:10:20.515 ], 00:10:20.515 "product_name": "Malloc disk", 00:10:20.515 "block_size": 512, 00:10:20.515 "num_blocks": 65536, 00:10:20.515 "uuid": "816aa910-ad81-4f2e-9073-7f1ed1cc818c", 00:10:20.515 "assigned_rate_limits": { 00:10:20.515 "rw_ios_per_sec": 0, 00:10:20.515 "rw_mbytes_per_sec": 0, 00:10:20.515 "r_mbytes_per_sec": 0, 00:10:20.515 "w_mbytes_per_sec": 0 00:10:20.515 }, 00:10:20.515 "claimed": true, 00:10:20.515 "claim_type": "exclusive_write", 00:10:20.515 "zoned": false, 00:10:20.515 "supported_io_types": { 00:10:20.515 "read": true, 00:10:20.515 "write": true, 00:10:20.515 "unmap": true, 00:10:20.515 "flush": true, 00:10:20.515 "reset": true, 00:10:20.515 "nvme_admin": false, 00:10:20.515 
"nvme_io": false, 00:10:20.515 "nvme_io_md": false, 00:10:20.515 "write_zeroes": true, 00:10:20.515 "zcopy": true, 00:10:20.515 "get_zone_info": false, 00:10:20.515 "zone_management": false, 00:10:20.515 "zone_append": false, 00:10:20.515 "compare": false, 00:10:20.515 "compare_and_write": false, 00:10:20.515 "abort": true, 00:10:20.515 "seek_hole": false, 00:10:20.515 "seek_data": false, 00:10:20.515 "copy": true, 00:10:20.515 "nvme_iov_md": false 00:10:20.515 }, 00:10:20.515 "memory_domains": [ 00:10:20.515 { 00:10:20.515 "dma_device_id": "system", 00:10:20.515 "dma_device_type": 1 00:10:20.515 }, 00:10:20.515 { 00:10:20.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.515 "dma_device_type": 2 00:10:20.515 } 00:10:20.515 ], 00:10:20.515 "driver_specific": {} 00:10:20.515 } 00:10:20.515 ] 00:10:20.515 08:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.515 08:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:20.515 08:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:20.515 08:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:20.515 08:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:20.515 08:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.515 08:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.515 08:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.515 08:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.515 08:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:20.515 08:46:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.515 08:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.515 08:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.515 08:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.515 08:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.515 08:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.515 08:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.515 08:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.515 08:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.776 08:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.776 "name": "Existed_Raid", 00:10:20.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.776 "strip_size_kb": 64, 00:10:20.776 "state": "configuring", 00:10:20.776 "raid_level": "raid0", 00:10:20.776 "superblock": false, 00:10:20.776 "num_base_bdevs": 4, 00:10:20.776 "num_base_bdevs_discovered": 3, 00:10:20.776 "num_base_bdevs_operational": 4, 00:10:20.776 "base_bdevs_list": [ 00:10:20.776 { 00:10:20.776 "name": "BaseBdev1", 00:10:20.776 "uuid": "26b17823-38bd-447e-bea9-c1ea8ee2246b", 00:10:20.776 "is_configured": true, 00:10:20.776 "data_offset": 0, 00:10:20.776 "data_size": 65536 00:10:20.776 }, 00:10:20.776 { 00:10:20.776 "name": "BaseBdev2", 00:10:20.776 "uuid": "62b10078-5522-40b8-8aad-8e211537f071", 00:10:20.776 "is_configured": true, 00:10:20.776 "data_offset": 0, 00:10:20.776 "data_size": 65536 00:10:20.776 }, 00:10:20.776 { 00:10:20.776 "name": "BaseBdev3", 00:10:20.776 
"uuid": "816aa910-ad81-4f2e-9073-7f1ed1cc818c", 00:10:20.776 "is_configured": true, 00:10:20.776 "data_offset": 0, 00:10:20.776 "data_size": 65536 00:10:20.776 }, 00:10:20.776 { 00:10:20.776 "name": "BaseBdev4", 00:10:20.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.776 "is_configured": false, 00:10:20.776 "data_offset": 0, 00:10:20.776 "data_size": 0 00:10:20.776 } 00:10:20.776 ] 00:10:20.777 }' 00:10:20.777 08:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.777 08:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.037 08:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:21.037 08:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.037 08:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.037 [2024-10-05 08:46:57.443520] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:21.037 [2024-10-05 08:46:57.443571] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:21.037 [2024-10-05 08:46:57.443581] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:21.037 [2024-10-05 08:46:57.443882] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:21.037 [2024-10-05 08:46:57.444105] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:21.037 [2024-10-05 08:46:57.444125] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:21.037 [2024-10-05 08:46:57.444408] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:21.037 BaseBdev4 00:10:21.037 08:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:21.037 08:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:21.037 08:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:21.037 08:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:21.037 08:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:21.037 08:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:21.037 08:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:21.037 08:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:21.037 08:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.037 08:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.037 08:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.037 08:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:21.037 08:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.037 08:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.037 [ 00:10:21.037 { 00:10:21.037 "name": "BaseBdev4", 00:10:21.037 "aliases": [ 00:10:21.037 "e931588a-4663-41bc-9129-23cf1beaf583" 00:10:21.037 ], 00:10:21.037 "product_name": "Malloc disk", 00:10:21.037 "block_size": 512, 00:10:21.037 "num_blocks": 65536, 00:10:21.037 "uuid": "e931588a-4663-41bc-9129-23cf1beaf583", 00:10:21.037 "assigned_rate_limits": { 00:10:21.037 "rw_ios_per_sec": 0, 00:10:21.037 "rw_mbytes_per_sec": 0, 00:10:21.037 "r_mbytes_per_sec": 0, 00:10:21.037 "w_mbytes_per_sec": 0 00:10:21.037 }, 
00:10:21.037 "claimed": true, 00:10:21.037 "claim_type": "exclusive_write", 00:10:21.037 "zoned": false, 00:10:21.037 "supported_io_types": { 00:10:21.037 "read": true, 00:10:21.037 "write": true, 00:10:21.037 "unmap": true, 00:10:21.037 "flush": true, 00:10:21.037 "reset": true, 00:10:21.037 "nvme_admin": false, 00:10:21.037 "nvme_io": false, 00:10:21.037 "nvme_io_md": false, 00:10:21.037 "write_zeroes": true, 00:10:21.037 "zcopy": true, 00:10:21.037 "get_zone_info": false, 00:10:21.037 "zone_management": false, 00:10:21.037 "zone_append": false, 00:10:21.037 "compare": false, 00:10:21.037 "compare_and_write": false, 00:10:21.037 "abort": true, 00:10:21.037 "seek_hole": false, 00:10:21.037 "seek_data": false, 00:10:21.037 "copy": true, 00:10:21.037 "nvme_iov_md": false 00:10:21.037 }, 00:10:21.037 "memory_domains": [ 00:10:21.037 { 00:10:21.037 "dma_device_id": "system", 00:10:21.037 "dma_device_type": 1 00:10:21.037 }, 00:10:21.037 { 00:10:21.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.037 "dma_device_type": 2 00:10:21.037 } 00:10:21.037 ], 00:10:21.037 "driver_specific": {} 00:10:21.037 } 00:10:21.037 ] 00:10:21.037 08:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.037 08:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:21.037 08:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:21.037 08:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:21.037 08:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:21.037 08:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.037 08:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:21.037 08:46:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:21.037 08:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.037 08:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:21.037 08:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.037 08:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.037 08:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.037 08:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.037 08:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.037 08:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.037 08:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.037 08:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.037 08:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.298 08:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.298 "name": "Existed_Raid", 00:10:21.298 "uuid": "cdbed375-2614-4cf4-9704-699349b5fae0", 00:10:21.298 "strip_size_kb": 64, 00:10:21.298 "state": "online", 00:10:21.298 "raid_level": "raid0", 00:10:21.298 "superblock": false, 00:10:21.298 "num_base_bdevs": 4, 00:10:21.298 "num_base_bdevs_discovered": 4, 00:10:21.298 "num_base_bdevs_operational": 4, 00:10:21.298 "base_bdevs_list": [ 00:10:21.298 { 00:10:21.298 "name": "BaseBdev1", 00:10:21.298 "uuid": "26b17823-38bd-447e-bea9-c1ea8ee2246b", 00:10:21.298 "is_configured": true, 00:10:21.298 "data_offset": 0, 00:10:21.298 "data_size": 65536 
00:10:21.298 }, 00:10:21.298 { 00:10:21.298 "name": "BaseBdev2", 00:10:21.298 "uuid": "62b10078-5522-40b8-8aad-8e211537f071", 00:10:21.298 "is_configured": true, 00:10:21.298 "data_offset": 0, 00:10:21.298 "data_size": 65536 00:10:21.298 }, 00:10:21.298 { 00:10:21.298 "name": "BaseBdev3", 00:10:21.298 "uuid": "816aa910-ad81-4f2e-9073-7f1ed1cc818c", 00:10:21.298 "is_configured": true, 00:10:21.298 "data_offset": 0, 00:10:21.298 "data_size": 65536 00:10:21.298 }, 00:10:21.298 { 00:10:21.298 "name": "BaseBdev4", 00:10:21.298 "uuid": "e931588a-4663-41bc-9129-23cf1beaf583", 00:10:21.298 "is_configured": true, 00:10:21.298 "data_offset": 0, 00:10:21.298 "data_size": 65536 00:10:21.298 } 00:10:21.298 ] 00:10:21.298 }' 00:10:21.298 08:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.298 08:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.559 08:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:21.559 08:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:21.559 08:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:21.559 08:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:21.559 08:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:21.559 08:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:21.559 08:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:21.559 08:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:21.559 08:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.559 08:46:57 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:21.559 [2024-10-05 08:46:57.911070] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:21.559 08:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.559 08:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:21.559 "name": "Existed_Raid", 00:10:21.559 "aliases": [ 00:10:21.559 "cdbed375-2614-4cf4-9704-699349b5fae0" 00:10:21.559 ], 00:10:21.559 "product_name": "Raid Volume", 00:10:21.559 "block_size": 512, 00:10:21.559 "num_blocks": 262144, 00:10:21.559 "uuid": "cdbed375-2614-4cf4-9704-699349b5fae0", 00:10:21.559 "assigned_rate_limits": { 00:10:21.559 "rw_ios_per_sec": 0, 00:10:21.559 "rw_mbytes_per_sec": 0, 00:10:21.559 "r_mbytes_per_sec": 0, 00:10:21.559 "w_mbytes_per_sec": 0 00:10:21.559 }, 00:10:21.559 "claimed": false, 00:10:21.559 "zoned": false, 00:10:21.559 "supported_io_types": { 00:10:21.559 "read": true, 00:10:21.559 "write": true, 00:10:21.559 "unmap": true, 00:10:21.559 "flush": true, 00:10:21.559 "reset": true, 00:10:21.559 "nvme_admin": false, 00:10:21.559 "nvme_io": false, 00:10:21.559 "nvme_io_md": false, 00:10:21.559 "write_zeroes": true, 00:10:21.559 "zcopy": false, 00:10:21.559 "get_zone_info": false, 00:10:21.559 "zone_management": false, 00:10:21.559 "zone_append": false, 00:10:21.559 "compare": false, 00:10:21.559 "compare_and_write": false, 00:10:21.559 "abort": false, 00:10:21.559 "seek_hole": false, 00:10:21.559 "seek_data": false, 00:10:21.559 "copy": false, 00:10:21.559 "nvme_iov_md": false 00:10:21.559 }, 00:10:21.559 "memory_domains": [ 00:10:21.559 { 00:10:21.559 "dma_device_id": "system", 00:10:21.559 "dma_device_type": 1 00:10:21.559 }, 00:10:21.559 { 00:10:21.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.559 "dma_device_type": 2 00:10:21.559 }, 00:10:21.559 { 00:10:21.559 "dma_device_id": "system", 00:10:21.559 "dma_device_type": 1 00:10:21.559 }, 
00:10:21.559 { 00:10:21.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.559 "dma_device_type": 2 00:10:21.559 }, 00:10:21.559 { 00:10:21.559 "dma_device_id": "system", 00:10:21.559 "dma_device_type": 1 00:10:21.559 }, 00:10:21.559 { 00:10:21.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.559 "dma_device_type": 2 00:10:21.559 }, 00:10:21.559 { 00:10:21.559 "dma_device_id": "system", 00:10:21.559 "dma_device_type": 1 00:10:21.559 }, 00:10:21.559 { 00:10:21.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.559 "dma_device_type": 2 00:10:21.559 } 00:10:21.559 ], 00:10:21.559 "driver_specific": { 00:10:21.559 "raid": { 00:10:21.559 "uuid": "cdbed375-2614-4cf4-9704-699349b5fae0", 00:10:21.559 "strip_size_kb": 64, 00:10:21.559 "state": "online", 00:10:21.559 "raid_level": "raid0", 00:10:21.559 "superblock": false, 00:10:21.559 "num_base_bdevs": 4, 00:10:21.559 "num_base_bdevs_discovered": 4, 00:10:21.559 "num_base_bdevs_operational": 4, 00:10:21.559 "base_bdevs_list": [ 00:10:21.559 { 00:10:21.559 "name": "BaseBdev1", 00:10:21.559 "uuid": "26b17823-38bd-447e-bea9-c1ea8ee2246b", 00:10:21.559 "is_configured": true, 00:10:21.559 "data_offset": 0, 00:10:21.559 "data_size": 65536 00:10:21.559 }, 00:10:21.559 { 00:10:21.559 "name": "BaseBdev2", 00:10:21.559 "uuid": "62b10078-5522-40b8-8aad-8e211537f071", 00:10:21.559 "is_configured": true, 00:10:21.559 "data_offset": 0, 00:10:21.559 "data_size": 65536 00:10:21.559 }, 00:10:21.559 { 00:10:21.559 "name": "BaseBdev3", 00:10:21.559 "uuid": "816aa910-ad81-4f2e-9073-7f1ed1cc818c", 00:10:21.559 "is_configured": true, 00:10:21.559 "data_offset": 0, 00:10:21.559 "data_size": 65536 00:10:21.559 }, 00:10:21.559 { 00:10:21.559 "name": "BaseBdev4", 00:10:21.559 "uuid": "e931588a-4663-41bc-9129-23cf1beaf583", 00:10:21.559 "is_configured": true, 00:10:21.559 "data_offset": 0, 00:10:21.559 "data_size": 65536 00:10:21.559 } 00:10:21.559 ] 00:10:21.559 } 00:10:21.559 } 00:10:21.559 }' 00:10:21.559 08:46:57 
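The raid_bdev_info dump above is immediately fed through the jq filter at bdev_raid.sh@188 to recover the configured base bdev names. A minimal sketch of that extraction step, assuming jq is installed and the JSON shape matches the dump in this log (the function name here is a hypothetical wrapper, not part of the suite):

```shell
# Pull the names of configured base bdevs out of bdev_get_bdevs-style
# output, mirroring the jq filter used at bdev_raid.sh@188.
get_configured_base_bdevs() {
    jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
}
```

Applied to the Existed_Raid dump in this log, the filter yields BaseBdev1 through BaseBdev4, which is the base_bdev_names list the test compares against.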
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:21.559 08:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:21.559 BaseBdev2 00:10:21.559 BaseBdev3 00:10:21.559 BaseBdev4' 00:10:21.559 08:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.559 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:21.559 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.819 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.819 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:21.819 08:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.819 08:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.819 08:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.819 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.819 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.819 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.819 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:21.819 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.819 08:46:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.819 08:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.819 08:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.819 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.819 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.819 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.819 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:21.819 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.819 08:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.819 08:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.819 08:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.819 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.819 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.819 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.819 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.819 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:21.819 08:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.819 08:46:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.819 08:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.819 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.819 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.819 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:21.819 08:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.819 08:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.819 [2024-10-05 08:46:58.234273] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:21.819 [2024-10-05 08:46:58.234369] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:21.819 [2024-10-05 08:46:58.234476] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:22.082 08:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.082 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:22.082 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:22.082 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:22.082 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:22.082 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:22.082 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:22.082 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.082 
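The has_redundancy call above (bdev_raid.sh@198-200) returns 1 for raid0, so after BaseBdev1 is deleted the test expects the array to go offline rather than stay online. A sketch of that branch, with the caveat that which levels the suite actually treats as redundant is an assumption for illustration (the real case statement is not shown in this log):

```shell
# raid0 stripes with no redundancy, so losing a base bdev takes the
# array offline; redundant levels survive the removal.
has_redundancy() {
    case $1 in
        raid1 | raid5f) return 0 ;;  # assumed redundant levels
        *) return 1 ;;
    esac
}

# Hypothetical helper: map a raid level to the state the test expects
# after one base bdev is removed.
expected_state_after_base_removal() {
    if has_redundancy "$1"; then
        echo online
    else
        echo offline
    fi
}
```

This matches the transition recorded in the log: "raid bdev state changing from online to offline" once BaseBdev1 is removed from the raid0 volume.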
08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:22.082 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:22.082 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.082 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:22.082 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.082 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.082 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.082 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.082 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.082 08:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.082 08:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.082 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.082 08:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.082 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.082 "name": "Existed_Raid", 00:10:22.082 "uuid": "cdbed375-2614-4cf4-9704-699349b5fae0", 00:10:22.082 "strip_size_kb": 64, 00:10:22.082 "state": "offline", 00:10:22.082 "raid_level": "raid0", 00:10:22.082 "superblock": false, 00:10:22.083 "num_base_bdevs": 4, 00:10:22.083 "num_base_bdevs_discovered": 3, 00:10:22.083 "num_base_bdevs_operational": 3, 00:10:22.083 "base_bdevs_list": [ 00:10:22.083 { 00:10:22.083 "name": null, 00:10:22.083 
"uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.083 "is_configured": false, 00:10:22.083 "data_offset": 0, 00:10:22.083 "data_size": 65536 00:10:22.083 }, 00:10:22.083 { 00:10:22.083 "name": "BaseBdev2", 00:10:22.083 "uuid": "62b10078-5522-40b8-8aad-8e211537f071", 00:10:22.083 "is_configured": true, 00:10:22.083 "data_offset": 0, 00:10:22.083 "data_size": 65536 00:10:22.083 }, 00:10:22.083 { 00:10:22.083 "name": "BaseBdev3", 00:10:22.083 "uuid": "816aa910-ad81-4f2e-9073-7f1ed1cc818c", 00:10:22.083 "is_configured": true, 00:10:22.083 "data_offset": 0, 00:10:22.083 "data_size": 65536 00:10:22.083 }, 00:10:22.083 { 00:10:22.083 "name": "BaseBdev4", 00:10:22.083 "uuid": "e931588a-4663-41bc-9129-23cf1beaf583", 00:10:22.083 "is_configured": true, 00:10:22.083 "data_offset": 0, 00:10:22.083 "data_size": 65536 00:10:22.083 } 00:10:22.083 ] 00:10:22.083 }' 00:10:22.083 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.083 08:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.342 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:22.342 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:22.342 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.342 08:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.342 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:22.342 08:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.602 08:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.602 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:22.602 08:46:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:22.602 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:22.602 08:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.602 08:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.602 [2024-10-05 08:46:58.853564] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:22.602 08:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.602 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:22.602 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:22.602 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.602 08:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.602 08:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.602 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:22.602 08:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.602 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:22.602 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:22.602 08:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:22.602 08:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.602 08:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.602 [2024-10-05 08:46:59.003153] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:22.862 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.862 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:22.862 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:22.862 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:22.862 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.862 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.862 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.862 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.862 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:22.862 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:22.862 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:22.862 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.862 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.862 [2024-10-05 08:46:59.150968] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:22.862 [2024-10-05 08:46:59.151083] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:22.862 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.862 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:22.862 08:46:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:22.862 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:22.862 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.862 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.862 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.862 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.862 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:22.862 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:22.862 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:22.862 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:22.862 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:22.862 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:22.862 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.862 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.123 BaseBdev2 00:10:23.123 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.123 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:23.123 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:23.123 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:23.123 
08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:23.123 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:23.123 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:23.123 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:23.123 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.123 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.123 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.123 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:23.123 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.123 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.123 [ 00:10:23.123 { 00:10:23.123 "name": "BaseBdev2", 00:10:23.123 "aliases": [ 00:10:23.123 "63dc133b-e12d-4f11-9e1f-43005bb1e426" 00:10:23.123 ], 00:10:23.123 "product_name": "Malloc disk", 00:10:23.123 "block_size": 512, 00:10:23.123 "num_blocks": 65536, 00:10:23.123 "uuid": "63dc133b-e12d-4f11-9e1f-43005bb1e426", 00:10:23.123 "assigned_rate_limits": { 00:10:23.123 "rw_ios_per_sec": 0, 00:10:23.123 "rw_mbytes_per_sec": 0, 00:10:23.123 "r_mbytes_per_sec": 0, 00:10:23.123 "w_mbytes_per_sec": 0 00:10:23.123 }, 00:10:23.123 "claimed": false, 00:10:23.123 "zoned": false, 00:10:23.123 "supported_io_types": { 00:10:23.123 "read": true, 00:10:23.123 "write": true, 00:10:23.123 "unmap": true, 00:10:23.123 "flush": true, 00:10:23.123 "reset": true, 00:10:23.123 "nvme_admin": false, 00:10:23.123 "nvme_io": false, 00:10:23.123 "nvme_io_md": false, 00:10:23.123 "write_zeroes": true, 
00:10:23.123 "zcopy": true, 00:10:23.123 "get_zone_info": false, 00:10:23.123 "zone_management": false, 00:10:23.123 "zone_append": false, 00:10:23.123 "compare": false, 00:10:23.123 "compare_and_write": false, 00:10:23.123 "abort": true, 00:10:23.123 "seek_hole": false, 00:10:23.123 "seek_data": false, 00:10:23.123 "copy": true, 00:10:23.123 "nvme_iov_md": false 00:10:23.123 }, 00:10:23.123 "memory_domains": [ 00:10:23.123 { 00:10:23.123 "dma_device_id": "system", 00:10:23.123 "dma_device_type": 1 00:10:23.123 }, 00:10:23.123 { 00:10:23.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.123 "dma_device_type": 2 00:10:23.123 } 00:10:23.123 ], 00:10:23.123 "driver_specific": {} 00:10:23.123 } 00:10:23.123 ] 00:10:23.123 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.123 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:23.123 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:23.123 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:23.123 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:23.123 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.123 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.123 BaseBdev3 00:10:23.123 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.123 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:23.123 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:23.123 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:23.123 08:46:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:23.123 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:23.123 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:23.123 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:23.123 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.123 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.123 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.123 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:23.123 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.123 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.123 [ 00:10:23.123 { 00:10:23.123 "name": "BaseBdev3", 00:10:23.123 "aliases": [ 00:10:23.123 "8c9bab71-a037-4cc9-a7ba-97564af60588" 00:10:23.123 ], 00:10:23.123 "product_name": "Malloc disk", 00:10:23.123 "block_size": 512, 00:10:23.123 "num_blocks": 65536, 00:10:23.123 "uuid": "8c9bab71-a037-4cc9-a7ba-97564af60588", 00:10:23.123 "assigned_rate_limits": { 00:10:23.123 "rw_ios_per_sec": 0, 00:10:23.123 "rw_mbytes_per_sec": 0, 00:10:23.123 "r_mbytes_per_sec": 0, 00:10:23.123 "w_mbytes_per_sec": 0 00:10:23.123 }, 00:10:23.123 "claimed": false, 00:10:23.123 "zoned": false, 00:10:23.123 "supported_io_types": { 00:10:23.123 "read": true, 00:10:23.123 "write": true, 00:10:23.123 "unmap": true, 00:10:23.123 "flush": true, 00:10:23.123 "reset": true, 00:10:23.123 "nvme_admin": false, 00:10:23.123 "nvme_io": false, 00:10:23.123 "nvme_io_md": false, 00:10:23.123 "write_zeroes": true, 
00:10:23.123 "zcopy": true, 00:10:23.123 "get_zone_info": false, 00:10:23.123 "zone_management": false, 00:10:23.123 "zone_append": false, 00:10:23.123 "compare": false, 00:10:23.123 "compare_and_write": false, 00:10:23.123 "abort": true, 00:10:23.123 "seek_hole": false, 00:10:23.123 "seek_data": false, 00:10:23.123 "copy": true, 00:10:23.123 "nvme_iov_md": false 00:10:23.123 }, 00:10:23.123 "memory_domains": [ 00:10:23.123 { 00:10:23.123 "dma_device_id": "system", 00:10:23.123 "dma_device_type": 1 00:10:23.123 }, 00:10:23.123 { 00:10:23.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.123 "dma_device_type": 2 00:10:23.123 } 00:10:23.123 ], 00:10:23.123 "driver_specific": {} 00:10:23.123 } 00:10:23.123 ] 00:10:23.123 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.123 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:23.123 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:23.124 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:23.124 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:23.124 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.124 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.124 BaseBdev4 00:10:23.124 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.124 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:23.124 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:23.124 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:23.124 08:46:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:23.124 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:23.124 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:23.124 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:23.124 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.124 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.124 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.124 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:23.124 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.124 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.124 [ 00:10:23.124 { 00:10:23.124 "name": "BaseBdev4", 00:10:23.124 "aliases": [ 00:10:23.124 "9206573b-3e29-4fcf-8d70-499e34030382" 00:10:23.124 ], 00:10:23.124 "product_name": "Malloc disk", 00:10:23.124 "block_size": 512, 00:10:23.124 "num_blocks": 65536, 00:10:23.124 "uuid": "9206573b-3e29-4fcf-8d70-499e34030382", 00:10:23.124 "assigned_rate_limits": { 00:10:23.124 "rw_ios_per_sec": 0, 00:10:23.124 "rw_mbytes_per_sec": 0, 00:10:23.124 "r_mbytes_per_sec": 0, 00:10:23.124 "w_mbytes_per_sec": 0 00:10:23.124 }, 00:10:23.124 "claimed": false, 00:10:23.124 "zoned": false, 00:10:23.124 "supported_io_types": { 00:10:23.124 "read": true, 00:10:23.124 "write": true, 00:10:23.124 "unmap": true, 00:10:23.124 "flush": true, 00:10:23.124 "reset": true, 00:10:23.124 "nvme_admin": false, 00:10:23.124 "nvme_io": false, 00:10:23.124 "nvme_io_md": false, 00:10:23.124 "write_zeroes": true, 
00:10:23.124 "zcopy": true, 00:10:23.124 "get_zone_info": false, 00:10:23.124 "zone_management": false, 00:10:23.124 "zone_append": false, 00:10:23.124 "compare": false, 00:10:23.124 "compare_and_write": false, 00:10:23.124 "abort": true, 00:10:23.124 "seek_hole": false, 00:10:23.124 "seek_data": false, 00:10:23.124 "copy": true, 00:10:23.124 "nvme_iov_md": false 00:10:23.124 }, 00:10:23.124 "memory_domains": [ 00:10:23.124 { 00:10:23.124 "dma_device_id": "system", 00:10:23.124 "dma_device_type": 1 00:10:23.124 }, 00:10:23.124 { 00:10:23.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.124 "dma_device_type": 2 00:10:23.124 } 00:10:23.124 ], 00:10:23.124 "driver_specific": {} 00:10:23.124 } 00:10:23.124 ] 00:10:23.124 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.124 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:23.124 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:23.124 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:23.124 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:23.124 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.124 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.124 [2024-10-05 08:46:59.551259] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:23.124 [2024-10-05 08:46:59.551378] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:23.124 [2024-10-05 08:46:59.551432] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:23.124 [2024-10-05 08:46:59.553484] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:23.124 [2024-10-05 08:46:59.553579] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:23.124 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.124 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:23.124 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.124 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.124 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:23.124 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.124 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.124 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.124 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.124 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.124 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.124 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.124 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.124 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.124 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.124 08:46:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.385 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.385 "name": "Existed_Raid", 00:10:23.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.385 "strip_size_kb": 64, 00:10:23.385 "state": "configuring", 00:10:23.385 "raid_level": "raid0", 00:10:23.385 "superblock": false, 00:10:23.385 "num_base_bdevs": 4, 00:10:23.385 "num_base_bdevs_discovered": 3, 00:10:23.385 "num_base_bdevs_operational": 4, 00:10:23.385 "base_bdevs_list": [ 00:10:23.385 { 00:10:23.385 "name": "BaseBdev1", 00:10:23.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.385 "is_configured": false, 00:10:23.385 "data_offset": 0, 00:10:23.385 "data_size": 0 00:10:23.385 }, 00:10:23.385 { 00:10:23.385 "name": "BaseBdev2", 00:10:23.385 "uuid": "63dc133b-e12d-4f11-9e1f-43005bb1e426", 00:10:23.385 "is_configured": true, 00:10:23.385 "data_offset": 0, 00:10:23.385 "data_size": 65536 00:10:23.385 }, 00:10:23.385 { 00:10:23.385 "name": "BaseBdev3", 00:10:23.385 "uuid": "8c9bab71-a037-4cc9-a7ba-97564af60588", 00:10:23.385 "is_configured": true, 00:10:23.385 "data_offset": 0, 00:10:23.385 "data_size": 65536 00:10:23.385 }, 00:10:23.385 { 00:10:23.385 "name": "BaseBdev4", 00:10:23.385 "uuid": "9206573b-3e29-4fcf-8d70-499e34030382", 00:10:23.385 "is_configured": true, 00:10:23.385 "data_offset": 0, 00:10:23.385 "data_size": 65536 00:10:23.385 } 00:10:23.385 ] 00:10:23.385 }' 00:10:23.385 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.385 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.645 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:23.645 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.645 08:46:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:23.645 [2024-10-05 08:46:59.962623] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:23.645 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.645 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:23.645 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.645 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.645 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:23.645 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:23.645 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.645 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.645 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.645 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.645 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.645 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.645 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.645 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.645 08:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.645 08:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.645 
08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.645 "name": "Existed_Raid", 00:10:23.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.645 "strip_size_kb": 64, 00:10:23.645 "state": "configuring", 00:10:23.645 "raid_level": "raid0", 00:10:23.645 "superblock": false, 00:10:23.645 "num_base_bdevs": 4, 00:10:23.645 "num_base_bdevs_discovered": 2, 00:10:23.645 "num_base_bdevs_operational": 4, 00:10:23.645 "base_bdevs_list": [ 00:10:23.645 { 00:10:23.645 "name": "BaseBdev1", 00:10:23.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.645 "is_configured": false, 00:10:23.645 "data_offset": 0, 00:10:23.645 "data_size": 0 00:10:23.645 }, 00:10:23.645 { 00:10:23.645 "name": null, 00:10:23.645 "uuid": "63dc133b-e12d-4f11-9e1f-43005bb1e426", 00:10:23.645 "is_configured": false, 00:10:23.645 "data_offset": 0, 00:10:23.645 "data_size": 65536 00:10:23.645 }, 00:10:23.645 { 00:10:23.645 "name": "BaseBdev3", 00:10:23.645 "uuid": "8c9bab71-a037-4cc9-a7ba-97564af60588", 00:10:23.645 "is_configured": true, 00:10:23.645 "data_offset": 0, 00:10:23.645 "data_size": 65536 00:10:23.645 }, 00:10:23.645 { 00:10:23.645 "name": "BaseBdev4", 00:10:23.645 "uuid": "9206573b-3e29-4fcf-8d70-499e34030382", 00:10:23.645 "is_configured": true, 00:10:23.645 "data_offset": 0, 00:10:23.645 "data_size": 65536 00:10:23.645 } 00:10:23.645 ] 00:10:23.645 }' 00:10:23.645 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.645 08:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.215 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:24.215 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.215 08:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.215 08:47:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.215 08:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.215 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:24.215 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:24.215 08:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.215 08:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.215 [2024-10-05 08:47:00.485258] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:24.215 BaseBdev1 00:10:24.215 08:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.215 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:24.215 08:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:24.215 08:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:24.215 08:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:24.215 08:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:24.215 08:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:24.215 08:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:24.215 08:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.215 08:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.215 08:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:10:24.215 08:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:24.215 08:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.215 08:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.215 [ 00:10:24.215 { 00:10:24.215 "name": "BaseBdev1", 00:10:24.215 "aliases": [ 00:10:24.215 "1b7c84a7-7f68-4f24-bdb9-156ff466968c" 00:10:24.215 ], 00:10:24.215 "product_name": "Malloc disk", 00:10:24.215 "block_size": 512, 00:10:24.215 "num_blocks": 65536, 00:10:24.215 "uuid": "1b7c84a7-7f68-4f24-bdb9-156ff466968c", 00:10:24.215 "assigned_rate_limits": { 00:10:24.215 "rw_ios_per_sec": 0, 00:10:24.215 "rw_mbytes_per_sec": 0, 00:10:24.215 "r_mbytes_per_sec": 0, 00:10:24.215 "w_mbytes_per_sec": 0 00:10:24.215 }, 00:10:24.215 "claimed": true, 00:10:24.215 "claim_type": "exclusive_write", 00:10:24.215 "zoned": false, 00:10:24.215 "supported_io_types": { 00:10:24.215 "read": true, 00:10:24.215 "write": true, 00:10:24.215 "unmap": true, 00:10:24.215 "flush": true, 00:10:24.215 "reset": true, 00:10:24.215 "nvme_admin": false, 00:10:24.215 "nvme_io": false, 00:10:24.215 "nvme_io_md": false, 00:10:24.215 "write_zeroes": true, 00:10:24.215 "zcopy": true, 00:10:24.215 "get_zone_info": false, 00:10:24.215 "zone_management": false, 00:10:24.215 "zone_append": false, 00:10:24.215 "compare": false, 00:10:24.215 "compare_and_write": false, 00:10:24.215 "abort": true, 00:10:24.215 "seek_hole": false, 00:10:24.215 "seek_data": false, 00:10:24.215 "copy": true, 00:10:24.215 "nvme_iov_md": false 00:10:24.215 }, 00:10:24.215 "memory_domains": [ 00:10:24.215 { 00:10:24.215 "dma_device_id": "system", 00:10:24.215 "dma_device_type": 1 00:10:24.215 }, 00:10:24.215 { 00:10:24.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.215 "dma_device_type": 2 00:10:24.215 } 00:10:24.215 ], 00:10:24.215 "driver_specific": {} 
00:10:24.215 } 00:10:24.215 ] 00:10:24.215 08:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.215 08:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:24.215 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:24.215 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.215 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.215 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:24.215 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.215 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.215 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.215 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.215 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.215 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.215 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.215 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.215 08:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.215 08:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.215 08:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.215 08:47:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.215 "name": "Existed_Raid", 00:10:24.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.215 "strip_size_kb": 64, 00:10:24.215 "state": "configuring", 00:10:24.215 "raid_level": "raid0", 00:10:24.215 "superblock": false, 00:10:24.215 "num_base_bdevs": 4, 00:10:24.215 "num_base_bdevs_discovered": 3, 00:10:24.215 "num_base_bdevs_operational": 4, 00:10:24.215 "base_bdevs_list": [ 00:10:24.215 { 00:10:24.215 "name": "BaseBdev1", 00:10:24.216 "uuid": "1b7c84a7-7f68-4f24-bdb9-156ff466968c", 00:10:24.216 "is_configured": true, 00:10:24.216 "data_offset": 0, 00:10:24.216 "data_size": 65536 00:10:24.216 }, 00:10:24.216 { 00:10:24.216 "name": null, 00:10:24.216 "uuid": "63dc133b-e12d-4f11-9e1f-43005bb1e426", 00:10:24.216 "is_configured": false, 00:10:24.216 "data_offset": 0, 00:10:24.216 "data_size": 65536 00:10:24.216 }, 00:10:24.216 { 00:10:24.216 "name": "BaseBdev3", 00:10:24.216 "uuid": "8c9bab71-a037-4cc9-a7ba-97564af60588", 00:10:24.216 "is_configured": true, 00:10:24.216 "data_offset": 0, 00:10:24.216 "data_size": 65536 00:10:24.216 }, 00:10:24.216 { 00:10:24.216 "name": "BaseBdev4", 00:10:24.216 "uuid": "9206573b-3e29-4fcf-8d70-499e34030382", 00:10:24.216 "is_configured": true, 00:10:24.216 "data_offset": 0, 00:10:24.216 "data_size": 65536 00:10:24.216 } 00:10:24.216 ] 00:10:24.216 }' 00:10:24.216 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.216 08:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.785 08:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.785 08:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.785 08:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.785 08:47:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:24.785 08:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.785 08:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:24.785 08:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:24.785 08:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.785 08:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.785 [2024-10-05 08:47:01.024396] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:24.785 08:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.785 08:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:24.785 08:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.785 08:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.785 08:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:24.785 08:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.785 08:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.785 08:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.785 08:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.785 08:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.785 08:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:10:24.785 08:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.785 08:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.785 08:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.786 08:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.786 08:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.786 08:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.786 "name": "Existed_Raid", 00:10:24.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.786 "strip_size_kb": 64, 00:10:24.786 "state": "configuring", 00:10:24.786 "raid_level": "raid0", 00:10:24.786 "superblock": false, 00:10:24.786 "num_base_bdevs": 4, 00:10:24.786 "num_base_bdevs_discovered": 2, 00:10:24.786 "num_base_bdevs_operational": 4, 00:10:24.786 "base_bdevs_list": [ 00:10:24.786 { 00:10:24.786 "name": "BaseBdev1", 00:10:24.786 "uuid": "1b7c84a7-7f68-4f24-bdb9-156ff466968c", 00:10:24.786 "is_configured": true, 00:10:24.786 "data_offset": 0, 00:10:24.786 "data_size": 65536 00:10:24.786 }, 00:10:24.786 { 00:10:24.786 "name": null, 00:10:24.786 "uuid": "63dc133b-e12d-4f11-9e1f-43005bb1e426", 00:10:24.786 "is_configured": false, 00:10:24.786 "data_offset": 0, 00:10:24.786 "data_size": 65536 00:10:24.786 }, 00:10:24.786 { 00:10:24.786 "name": null, 00:10:24.786 "uuid": "8c9bab71-a037-4cc9-a7ba-97564af60588", 00:10:24.786 "is_configured": false, 00:10:24.786 "data_offset": 0, 00:10:24.786 "data_size": 65536 00:10:24.786 }, 00:10:24.786 { 00:10:24.786 "name": "BaseBdev4", 00:10:24.786 "uuid": "9206573b-3e29-4fcf-8d70-499e34030382", 00:10:24.786 "is_configured": true, 00:10:24.786 "data_offset": 0, 00:10:24.786 "data_size": 65536 00:10:24.786 } 00:10:24.786 ] 00:10:24.786 }' 
00:10:24.786 08:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.786 08:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.045 08:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.045 08:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.045 08:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.045 08:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:25.045 08:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.306 08:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:25.306 08:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:25.306 08:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.306 08:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.306 [2024-10-05 08:47:01.531541] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:25.306 08:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.306 08:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:25.306 08:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.306 08:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.306 08:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:25.306 08:47:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.306 08:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.306 08:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.306 08:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.306 08:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.306 08:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.306 08:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.306 08:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.306 08:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.306 08:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.306 08:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.306 08:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.306 "name": "Existed_Raid", 00:10:25.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.306 "strip_size_kb": 64, 00:10:25.306 "state": "configuring", 00:10:25.306 "raid_level": "raid0", 00:10:25.306 "superblock": false, 00:10:25.306 "num_base_bdevs": 4, 00:10:25.306 "num_base_bdevs_discovered": 3, 00:10:25.306 "num_base_bdevs_operational": 4, 00:10:25.306 "base_bdevs_list": [ 00:10:25.306 { 00:10:25.306 "name": "BaseBdev1", 00:10:25.306 "uuid": "1b7c84a7-7f68-4f24-bdb9-156ff466968c", 00:10:25.306 "is_configured": true, 00:10:25.306 "data_offset": 0, 00:10:25.306 "data_size": 65536 00:10:25.306 }, 00:10:25.306 { 00:10:25.306 "name": null, 00:10:25.306 
"uuid": "63dc133b-e12d-4f11-9e1f-43005bb1e426", 00:10:25.306 "is_configured": false, 00:10:25.306 "data_offset": 0, 00:10:25.306 "data_size": 65536 00:10:25.306 }, 00:10:25.306 { 00:10:25.306 "name": "BaseBdev3", 00:10:25.306 "uuid": "8c9bab71-a037-4cc9-a7ba-97564af60588", 00:10:25.306 "is_configured": true, 00:10:25.306 "data_offset": 0, 00:10:25.306 "data_size": 65536 00:10:25.306 }, 00:10:25.306 { 00:10:25.306 "name": "BaseBdev4", 00:10:25.306 "uuid": "9206573b-3e29-4fcf-8d70-499e34030382", 00:10:25.306 "is_configured": true, 00:10:25.306 "data_offset": 0, 00:10:25.306 "data_size": 65536 00:10:25.306 } 00:10:25.306 ] 00:10:25.306 }' 00:10:25.306 08:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.306 08:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.566 08:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:25.566 08:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.566 08:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.566 08:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.566 08:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.566 08:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:25.566 08:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:25.566 08:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.566 08:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.566 [2024-10-05 08:47:01.990750] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:25.825 08:47:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.825 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:25.825 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.825 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.825 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:25.825 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.825 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.825 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.825 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.825 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.825 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.825 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.825 08:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.825 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.825 08:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.825 08:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.825 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.825 "name": "Existed_Raid", 00:10:25.825 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:25.825 "strip_size_kb": 64, 00:10:25.825 "state": "configuring", 00:10:25.825 "raid_level": "raid0", 00:10:25.825 "superblock": false, 00:10:25.825 "num_base_bdevs": 4, 00:10:25.825 "num_base_bdevs_discovered": 2, 00:10:25.825 "num_base_bdevs_operational": 4, 00:10:25.825 "base_bdevs_list": [ 00:10:25.825 { 00:10:25.825 "name": null, 00:10:25.825 "uuid": "1b7c84a7-7f68-4f24-bdb9-156ff466968c", 00:10:25.825 "is_configured": false, 00:10:25.825 "data_offset": 0, 00:10:25.825 "data_size": 65536 00:10:25.825 }, 00:10:25.825 { 00:10:25.825 "name": null, 00:10:25.826 "uuid": "63dc133b-e12d-4f11-9e1f-43005bb1e426", 00:10:25.826 "is_configured": false, 00:10:25.826 "data_offset": 0, 00:10:25.826 "data_size": 65536 00:10:25.826 }, 00:10:25.826 { 00:10:25.826 "name": "BaseBdev3", 00:10:25.826 "uuid": "8c9bab71-a037-4cc9-a7ba-97564af60588", 00:10:25.826 "is_configured": true, 00:10:25.826 "data_offset": 0, 00:10:25.826 "data_size": 65536 00:10:25.826 }, 00:10:25.826 { 00:10:25.826 "name": "BaseBdev4", 00:10:25.826 "uuid": "9206573b-3e29-4fcf-8d70-499e34030382", 00:10:25.826 "is_configured": true, 00:10:25.826 "data_offset": 0, 00:10:25.826 "data_size": 65536 00:10:25.826 } 00:10:25.826 ] 00:10:25.826 }' 00:10:25.826 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.826 08:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.085 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:26.085 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.085 08:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.085 08:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.085 08:47:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.085 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:26.085 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:26.085 08:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.085 08:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.085 [2024-10-05 08:47:02.539971] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:26.085 08:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.085 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:26.085 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.085 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.085 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:26.085 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.085 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.085 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.085 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.085 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.085 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.085 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:26.085 08:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.085 08:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.085 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.344 08:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.344 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.344 "name": "Existed_Raid", 00:10:26.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.344 "strip_size_kb": 64, 00:10:26.344 "state": "configuring", 00:10:26.344 "raid_level": "raid0", 00:10:26.345 "superblock": false, 00:10:26.345 "num_base_bdevs": 4, 00:10:26.345 "num_base_bdevs_discovered": 3, 00:10:26.345 "num_base_bdevs_operational": 4, 00:10:26.345 "base_bdevs_list": [ 00:10:26.345 { 00:10:26.345 "name": null, 00:10:26.345 "uuid": "1b7c84a7-7f68-4f24-bdb9-156ff466968c", 00:10:26.345 "is_configured": false, 00:10:26.345 "data_offset": 0, 00:10:26.345 "data_size": 65536 00:10:26.345 }, 00:10:26.345 { 00:10:26.345 "name": "BaseBdev2", 00:10:26.345 "uuid": "63dc133b-e12d-4f11-9e1f-43005bb1e426", 00:10:26.345 "is_configured": true, 00:10:26.345 "data_offset": 0, 00:10:26.345 "data_size": 65536 00:10:26.345 }, 00:10:26.345 { 00:10:26.345 "name": "BaseBdev3", 00:10:26.345 "uuid": "8c9bab71-a037-4cc9-a7ba-97564af60588", 00:10:26.345 "is_configured": true, 00:10:26.345 "data_offset": 0, 00:10:26.345 "data_size": 65536 00:10:26.345 }, 00:10:26.345 { 00:10:26.345 "name": "BaseBdev4", 00:10:26.345 "uuid": "9206573b-3e29-4fcf-8d70-499e34030382", 00:10:26.345 "is_configured": true, 00:10:26.345 "data_offset": 0, 00:10:26.345 "data_size": 65536 00:10:26.345 } 00:10:26.345 ] 00:10:26.345 }' 00:10:26.345 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:10:26.345 08:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.606 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.606 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:26.606 08:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.606 08:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.606 08:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.606 08:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:26.606 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.606 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.606 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:26.606 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.606 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.606 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1b7c84a7-7f68-4f24-bdb9-156ff466968c 00:10:26.606 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.606 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.865 [2024-10-05 08:47:03.088455] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:26.865 [2024-10-05 08:47:03.088567] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:10:26.865 [2024-10-05 08:47:03.088591] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:26.865 [2024-10-05 08:47:03.088910] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:26.865 [2024-10-05 08:47:03.089142] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:26.865 [2024-10-05 08:47:03.089186] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:26.865 [2024-10-05 08:47:03.089491] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:26.865 NewBaseBdev 00:10:26.865 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.865 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:26.865 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:26.865 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:26.865 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:26.865 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:26.865 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:26.865 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:26.865 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.865 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.865 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.865 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:26.865 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.865 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.865 [ 00:10:26.865 { 00:10:26.865 "name": "NewBaseBdev", 00:10:26.865 "aliases": [ 00:10:26.865 "1b7c84a7-7f68-4f24-bdb9-156ff466968c" 00:10:26.865 ], 00:10:26.865 "product_name": "Malloc disk", 00:10:26.865 "block_size": 512, 00:10:26.865 "num_blocks": 65536, 00:10:26.865 "uuid": "1b7c84a7-7f68-4f24-bdb9-156ff466968c", 00:10:26.865 "assigned_rate_limits": { 00:10:26.865 "rw_ios_per_sec": 0, 00:10:26.865 "rw_mbytes_per_sec": 0, 00:10:26.865 "r_mbytes_per_sec": 0, 00:10:26.865 "w_mbytes_per_sec": 0 00:10:26.865 }, 00:10:26.865 "claimed": true, 00:10:26.865 "claim_type": "exclusive_write", 00:10:26.865 "zoned": false, 00:10:26.865 "supported_io_types": { 00:10:26.865 "read": true, 00:10:26.865 "write": true, 00:10:26.865 "unmap": true, 00:10:26.865 "flush": true, 00:10:26.865 "reset": true, 00:10:26.865 "nvme_admin": false, 00:10:26.865 "nvme_io": false, 00:10:26.865 "nvme_io_md": false, 00:10:26.865 "write_zeroes": true, 00:10:26.865 "zcopy": true, 00:10:26.865 "get_zone_info": false, 00:10:26.865 "zone_management": false, 00:10:26.865 "zone_append": false, 00:10:26.865 "compare": false, 00:10:26.865 "compare_and_write": false, 00:10:26.865 "abort": true, 00:10:26.865 "seek_hole": false, 00:10:26.865 "seek_data": false, 00:10:26.865 "copy": true, 00:10:26.865 "nvme_iov_md": false 00:10:26.865 }, 00:10:26.865 "memory_domains": [ 00:10:26.865 { 00:10:26.865 "dma_device_id": "system", 00:10:26.865 "dma_device_type": 1 00:10:26.865 }, 00:10:26.865 { 00:10:26.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.865 "dma_device_type": 2 00:10:26.865 } 00:10:26.865 ], 00:10:26.865 "driver_specific": {} 00:10:26.865 } 00:10:26.865 ] 00:10:26.865 08:47:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.865 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:26.865 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:26.865 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.865 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:26.865 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:26.865 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.865 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.865 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.865 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.865 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.865 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.865 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.865 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.865 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.865 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.865 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.865 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.865 "name": 
"Existed_Raid", 00:10:26.865 "uuid": "b296a500-3002-47d3-8425-5b1219a808c0", 00:10:26.865 "strip_size_kb": 64, 00:10:26.865 "state": "online", 00:10:26.865 "raid_level": "raid0", 00:10:26.865 "superblock": false, 00:10:26.865 "num_base_bdevs": 4, 00:10:26.865 "num_base_bdevs_discovered": 4, 00:10:26.865 "num_base_bdevs_operational": 4, 00:10:26.865 "base_bdevs_list": [ 00:10:26.865 { 00:10:26.865 "name": "NewBaseBdev", 00:10:26.865 "uuid": "1b7c84a7-7f68-4f24-bdb9-156ff466968c", 00:10:26.865 "is_configured": true, 00:10:26.865 "data_offset": 0, 00:10:26.865 "data_size": 65536 00:10:26.865 }, 00:10:26.866 { 00:10:26.866 "name": "BaseBdev2", 00:10:26.866 "uuid": "63dc133b-e12d-4f11-9e1f-43005bb1e426", 00:10:26.866 "is_configured": true, 00:10:26.866 "data_offset": 0, 00:10:26.866 "data_size": 65536 00:10:26.866 }, 00:10:26.866 { 00:10:26.866 "name": "BaseBdev3", 00:10:26.866 "uuid": "8c9bab71-a037-4cc9-a7ba-97564af60588", 00:10:26.866 "is_configured": true, 00:10:26.866 "data_offset": 0, 00:10:26.866 "data_size": 65536 00:10:26.866 }, 00:10:26.866 { 00:10:26.866 "name": "BaseBdev4", 00:10:26.866 "uuid": "9206573b-3e29-4fcf-8d70-499e34030382", 00:10:26.866 "is_configured": true, 00:10:26.866 "data_offset": 0, 00:10:26.866 "data_size": 65536 00:10:26.866 } 00:10:26.866 ] 00:10:26.866 }' 00:10:26.866 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.866 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.125 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:27.125 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:27.125 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:27.125 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:27.125 08:47:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:27.125 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:27.125 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:27.125 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:27.125 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.125 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.125 [2024-10-05 08:47:03.560017] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:27.125 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.385 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:27.385 "name": "Existed_Raid", 00:10:27.385 "aliases": [ 00:10:27.385 "b296a500-3002-47d3-8425-5b1219a808c0" 00:10:27.385 ], 00:10:27.385 "product_name": "Raid Volume", 00:10:27.385 "block_size": 512, 00:10:27.385 "num_blocks": 262144, 00:10:27.385 "uuid": "b296a500-3002-47d3-8425-5b1219a808c0", 00:10:27.385 "assigned_rate_limits": { 00:10:27.385 "rw_ios_per_sec": 0, 00:10:27.385 "rw_mbytes_per_sec": 0, 00:10:27.385 "r_mbytes_per_sec": 0, 00:10:27.385 "w_mbytes_per_sec": 0 00:10:27.385 }, 00:10:27.385 "claimed": false, 00:10:27.385 "zoned": false, 00:10:27.385 "supported_io_types": { 00:10:27.385 "read": true, 00:10:27.385 "write": true, 00:10:27.385 "unmap": true, 00:10:27.385 "flush": true, 00:10:27.385 "reset": true, 00:10:27.385 "nvme_admin": false, 00:10:27.385 "nvme_io": false, 00:10:27.385 "nvme_io_md": false, 00:10:27.385 "write_zeroes": true, 00:10:27.385 "zcopy": false, 00:10:27.385 "get_zone_info": false, 00:10:27.385 "zone_management": false, 00:10:27.385 "zone_append": false, 00:10:27.385 "compare": 
false, 00:10:27.385 "compare_and_write": false, 00:10:27.385 "abort": false, 00:10:27.385 "seek_hole": false, 00:10:27.385 "seek_data": false, 00:10:27.385 "copy": false, 00:10:27.385 "nvme_iov_md": false 00:10:27.385 }, 00:10:27.385 "memory_domains": [ 00:10:27.385 { 00:10:27.385 "dma_device_id": "system", 00:10:27.385 "dma_device_type": 1 00:10:27.385 }, 00:10:27.385 { 00:10:27.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.385 "dma_device_type": 2 00:10:27.385 }, 00:10:27.385 { 00:10:27.385 "dma_device_id": "system", 00:10:27.385 "dma_device_type": 1 00:10:27.385 }, 00:10:27.385 { 00:10:27.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.385 "dma_device_type": 2 00:10:27.385 }, 00:10:27.385 { 00:10:27.385 "dma_device_id": "system", 00:10:27.385 "dma_device_type": 1 00:10:27.385 }, 00:10:27.385 { 00:10:27.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.385 "dma_device_type": 2 00:10:27.385 }, 00:10:27.385 { 00:10:27.385 "dma_device_id": "system", 00:10:27.385 "dma_device_type": 1 00:10:27.385 }, 00:10:27.385 { 00:10:27.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.385 "dma_device_type": 2 00:10:27.385 } 00:10:27.385 ], 00:10:27.385 "driver_specific": { 00:10:27.385 "raid": { 00:10:27.385 "uuid": "b296a500-3002-47d3-8425-5b1219a808c0", 00:10:27.385 "strip_size_kb": 64, 00:10:27.385 "state": "online", 00:10:27.385 "raid_level": "raid0", 00:10:27.385 "superblock": false, 00:10:27.385 "num_base_bdevs": 4, 00:10:27.385 "num_base_bdevs_discovered": 4, 00:10:27.385 "num_base_bdevs_operational": 4, 00:10:27.385 "base_bdevs_list": [ 00:10:27.385 { 00:10:27.385 "name": "NewBaseBdev", 00:10:27.385 "uuid": "1b7c84a7-7f68-4f24-bdb9-156ff466968c", 00:10:27.385 "is_configured": true, 00:10:27.385 "data_offset": 0, 00:10:27.385 "data_size": 65536 00:10:27.385 }, 00:10:27.385 { 00:10:27.385 "name": "BaseBdev2", 00:10:27.385 "uuid": "63dc133b-e12d-4f11-9e1f-43005bb1e426", 00:10:27.385 "is_configured": true, 00:10:27.385 "data_offset": 0, 00:10:27.385 
"data_size": 65536 00:10:27.385 }, 00:10:27.385 { 00:10:27.385 "name": "BaseBdev3", 00:10:27.385 "uuid": "8c9bab71-a037-4cc9-a7ba-97564af60588", 00:10:27.385 "is_configured": true, 00:10:27.385 "data_offset": 0, 00:10:27.385 "data_size": 65536 00:10:27.385 }, 00:10:27.385 { 00:10:27.385 "name": "BaseBdev4", 00:10:27.385 "uuid": "9206573b-3e29-4fcf-8d70-499e34030382", 00:10:27.385 "is_configured": true, 00:10:27.385 "data_offset": 0, 00:10:27.385 "data_size": 65536 00:10:27.385 } 00:10:27.385 ] 00:10:27.385 } 00:10:27.385 } 00:10:27.385 }' 00:10:27.385 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:27.385 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:27.385 BaseBdev2 00:10:27.385 BaseBdev3 00:10:27.385 BaseBdev4' 00:10:27.385 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.385 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:27.385 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.385 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:27.385 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.385 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.385 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.385 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.385 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 
' 00:10:27.385 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.385 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.385 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:27.385 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.385 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.385 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.385 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.385 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.385 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.385 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.385 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:27.385 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.385 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.385 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.385 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.385 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.385 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 
00:10:27.385 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.385 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.385 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:27.385 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.385 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.645 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.645 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.645 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.645 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:27.645 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.645 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.645 [2024-10-05 08:47:03.879112] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:27.645 [2024-10-05 08:47:03.879186] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:27.645 [2024-10-05 08:47:03.879266] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:27.645 [2024-10-05 08:47:03.879342] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:27.645 [2024-10-05 08:47:03.879352] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:27.645 08:47:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.645 08:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 68225 00:10:27.645 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 68225 ']' 00:10:27.645 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 68225 00:10:27.645 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:27.645 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:27.645 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68225 00:10:27.645 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:27.645 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:27.645 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68225' 00:10:27.645 killing process with pid 68225 00:10:27.645 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 68225 00:10:27.645 [2024-10-05 08:47:03.924897] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:27.645 08:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 68225 00:10:27.904 [2024-10-05 08:47:04.347811] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:29.285 08:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:29.286 00:10:29.286 real 0m11.645s 00:10:29.286 user 0m18.071s 00:10:29.286 sys 0m2.234s 00:10:29.286 08:47:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:29.286 08:47:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:29.286 ************************************ 00:10:29.286 END TEST raid_state_function_test 00:10:29.286 ************************************ 00:10:29.286 08:47:05 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:10:29.286 08:47:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:29.286 08:47:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:29.286 08:47:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:29.286 ************************************ 00:10:29.286 START TEST raid_state_function_test_sb 00:10:29.286 ************************************ 00:10:29.286 08:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:10:29.286 08:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:29.286 08:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:29.286 08:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:29.286 08:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:29.286 08:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:29.286 08:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:29.286 08:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:29.286 08:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:29.286 08:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:29.286 08:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:29.286 08:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
(( i++ )) 00:10:29.286 08:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:29.286 08:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:29.286 08:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:29.286 08:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:29.286 08:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:29.286 08:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:29.286 08:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:29.286 08:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:29.286 08:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:29.286 08:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:29.286 08:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:29.286 08:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:29.286 08:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:29.286 08:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:29.286 08:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:29.286 08:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:29.286 08:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:29.286 08:47:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:29.286 Process raid pid: 68829 00:10:29.286 08:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68829 00:10:29.286 08:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:29.286 08:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68829' 00:10:29.286 08:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68829 00:10:29.286 08:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 68829 ']' 00:10:29.286 08:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.286 08:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:29.546 08:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.546 08:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:29.546 08:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.546 [2024-10-05 08:47:05.844126] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:10:29.546 [2024-10-05 08:47:05.844349] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:29.546 [2024-10-05 08:47:06.007584] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.806 [2024-10-05 08:47:06.251390] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.066 [2024-10-05 08:47:06.487164] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:30.066 [2024-10-05 08:47:06.487216] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:30.326 08:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:30.326 08:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:30.326 08:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:30.326 08:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.326 08:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.326 [2024-10-05 08:47:06.670724] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:30.326 [2024-10-05 08:47:06.670787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:30.326 [2024-10-05 08:47:06.670796] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:30.326 [2024-10-05 08:47:06.670806] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:30.326 [2024-10-05 08:47:06.670813] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:10:30.326 [2024-10-05 08:47:06.670822] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:30.326 [2024-10-05 08:47:06.670828] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:30.326 [2024-10-05 08:47:06.670836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:30.326 08:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.326 08:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:30.326 08:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.326 08:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.326 08:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:30.326 08:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.326 08:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.326 08:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.326 08:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.326 08:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.326 08:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.326 08:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.326 08:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.326 08:47:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.326 08:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.326 08:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.326 08:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.326 "name": "Existed_Raid", 00:10:30.326 "uuid": "bd2d2437-ce8a-45a3-97f8-5d6a2c537562", 00:10:30.326 "strip_size_kb": 64, 00:10:30.326 "state": "configuring", 00:10:30.326 "raid_level": "raid0", 00:10:30.326 "superblock": true, 00:10:30.326 "num_base_bdevs": 4, 00:10:30.326 "num_base_bdevs_discovered": 0, 00:10:30.326 "num_base_bdevs_operational": 4, 00:10:30.326 "base_bdevs_list": [ 00:10:30.326 { 00:10:30.326 "name": "BaseBdev1", 00:10:30.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.326 "is_configured": false, 00:10:30.326 "data_offset": 0, 00:10:30.326 "data_size": 0 00:10:30.326 }, 00:10:30.326 { 00:10:30.326 "name": "BaseBdev2", 00:10:30.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.326 "is_configured": false, 00:10:30.326 "data_offset": 0, 00:10:30.326 "data_size": 0 00:10:30.326 }, 00:10:30.326 { 00:10:30.326 "name": "BaseBdev3", 00:10:30.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.326 "is_configured": false, 00:10:30.326 "data_offset": 0, 00:10:30.326 "data_size": 0 00:10:30.326 }, 00:10:30.326 { 00:10:30.326 "name": "BaseBdev4", 00:10:30.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.326 "is_configured": false, 00:10:30.326 "data_offset": 0, 00:10:30.326 "data_size": 0 00:10:30.326 } 00:10:30.326 ] 00:10:30.326 }' 00:10:30.326 08:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.326 08:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.896 08:47:07 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:30.896 08:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.896 08:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.896 [2024-10-05 08:47:07.093894] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:30.896 [2024-10-05 08:47:07.094033] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:30.896 08:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.896 08:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:30.896 08:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.896 08:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.896 [2024-10-05 08:47:07.101922] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:30.896 [2024-10-05 08:47:07.101975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:30.896 [2024-10-05 08:47:07.101985] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:30.896 [2024-10-05 08:47:07.101995] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:30.896 [2024-10-05 08:47:07.102001] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:30.896 [2024-10-05 08:47:07.102010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:30.896 [2024-10-05 08:47:07.102016] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:10:30.896 [2024-10-05 08:47:07.102026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:30.896 08:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.896 08:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:30.896 08:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.896 08:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.896 [2024-10-05 08:47:07.185611] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:30.896 BaseBdev1 00:10:30.896 08:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.896 08:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:30.896 08:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:30.896 08:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:30.896 08:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:30.896 08:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:30.896 08:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:30.896 08:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:30.896 08:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.896 08:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.896 08:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:30.896 08:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:30.896 08:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.896 08:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.896 [ 00:10:30.896 { 00:10:30.896 "name": "BaseBdev1", 00:10:30.896 "aliases": [ 00:10:30.896 "275e826b-1d5b-439c-86de-8b2779040be7" 00:10:30.896 ], 00:10:30.896 "product_name": "Malloc disk", 00:10:30.896 "block_size": 512, 00:10:30.896 "num_blocks": 65536, 00:10:30.896 "uuid": "275e826b-1d5b-439c-86de-8b2779040be7", 00:10:30.896 "assigned_rate_limits": { 00:10:30.896 "rw_ios_per_sec": 0, 00:10:30.896 "rw_mbytes_per_sec": 0, 00:10:30.896 "r_mbytes_per_sec": 0, 00:10:30.896 "w_mbytes_per_sec": 0 00:10:30.896 }, 00:10:30.896 "claimed": true, 00:10:30.896 "claim_type": "exclusive_write", 00:10:30.896 "zoned": false, 00:10:30.896 "supported_io_types": { 00:10:30.896 "read": true, 00:10:30.896 "write": true, 00:10:30.896 "unmap": true, 00:10:30.896 "flush": true, 00:10:30.896 "reset": true, 00:10:30.896 "nvme_admin": false, 00:10:30.896 "nvme_io": false, 00:10:30.896 "nvme_io_md": false, 00:10:30.896 "write_zeroes": true, 00:10:30.897 "zcopy": true, 00:10:30.897 "get_zone_info": false, 00:10:30.897 "zone_management": false, 00:10:30.897 "zone_append": false, 00:10:30.897 "compare": false, 00:10:30.897 "compare_and_write": false, 00:10:30.897 "abort": true, 00:10:30.897 "seek_hole": false, 00:10:30.897 "seek_data": false, 00:10:30.897 "copy": true, 00:10:30.897 "nvme_iov_md": false 00:10:30.897 }, 00:10:30.897 "memory_domains": [ 00:10:30.897 { 00:10:30.897 "dma_device_id": "system", 00:10:30.897 "dma_device_type": 1 00:10:30.897 }, 00:10:30.897 { 00:10:30.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.897 "dma_device_type": 2 00:10:30.897 } 00:10:30.897 ], 00:10:30.897 "driver_specific": {} 
00:10:30.897 } 00:10:30.897 ] 00:10:30.897 08:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.897 08:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:30.897 08:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:30.897 08:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.897 08:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.897 08:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:30.897 08:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.897 08:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.897 08:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.897 08:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.897 08:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.897 08:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.897 08:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.897 08:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.897 08:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.897 08:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.897 08:47:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.897 08:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.897 "name": "Existed_Raid", 00:10:30.897 "uuid": "58c6fa5e-1e9f-42f4-b783-d63663393c3d", 00:10:30.897 "strip_size_kb": 64, 00:10:30.897 "state": "configuring", 00:10:30.897 "raid_level": "raid0", 00:10:30.897 "superblock": true, 00:10:30.897 "num_base_bdevs": 4, 00:10:30.897 "num_base_bdevs_discovered": 1, 00:10:30.897 "num_base_bdevs_operational": 4, 00:10:30.897 "base_bdevs_list": [ 00:10:30.897 { 00:10:30.897 "name": "BaseBdev1", 00:10:30.897 "uuid": "275e826b-1d5b-439c-86de-8b2779040be7", 00:10:30.897 "is_configured": true, 00:10:30.897 "data_offset": 2048, 00:10:30.897 "data_size": 63488 00:10:30.897 }, 00:10:30.897 { 00:10:30.897 "name": "BaseBdev2", 00:10:30.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.897 "is_configured": false, 00:10:30.897 "data_offset": 0, 00:10:30.897 "data_size": 0 00:10:30.897 }, 00:10:30.897 { 00:10:30.897 "name": "BaseBdev3", 00:10:30.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.897 "is_configured": false, 00:10:30.897 "data_offset": 0, 00:10:30.897 "data_size": 0 00:10:30.897 }, 00:10:30.897 { 00:10:30.897 "name": "BaseBdev4", 00:10:30.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.897 "is_configured": false, 00:10:30.897 "data_offset": 0, 00:10:30.897 "data_size": 0 00:10:30.897 } 00:10:30.897 ] 00:10:30.897 }' 00:10:30.897 08:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.897 08:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.467 08:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:31.467 08:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.467 08:47:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:31.467 [2024-10-05 08:47:07.640933] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:31.467 [2024-10-05 08:47:07.641065] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:31.467 08:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.467 08:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:31.467 08:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.467 08:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.467 [2024-10-05 08:47:07.652984] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:31.467 [2024-10-05 08:47:07.655055] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:31.467 [2024-10-05 08:47:07.655128] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:31.467 [2024-10-05 08:47:07.655167] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:31.467 [2024-10-05 08:47:07.655191] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:31.467 [2024-10-05 08:47:07.655210] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:31.467 [2024-10-05 08:47:07.655230] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:31.467 08:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.467 08:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:31.467 08:47:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:31.467 08:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:31.467 08:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.467 08:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.467 08:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:31.467 08:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.467 08:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.467 08:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.467 08:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.467 08:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.467 08:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.467 08:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.467 08:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.467 08:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.467 08:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.467 08:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.467 08:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.467 "name": 
"Existed_Raid", 00:10:31.467 "uuid": "08913191-67f5-4575-91cf-41bb8a748a92", 00:10:31.467 "strip_size_kb": 64, 00:10:31.467 "state": "configuring", 00:10:31.467 "raid_level": "raid0", 00:10:31.467 "superblock": true, 00:10:31.467 "num_base_bdevs": 4, 00:10:31.467 "num_base_bdevs_discovered": 1, 00:10:31.467 "num_base_bdevs_operational": 4, 00:10:31.467 "base_bdevs_list": [ 00:10:31.467 { 00:10:31.467 "name": "BaseBdev1", 00:10:31.467 "uuid": "275e826b-1d5b-439c-86de-8b2779040be7", 00:10:31.467 "is_configured": true, 00:10:31.467 "data_offset": 2048, 00:10:31.467 "data_size": 63488 00:10:31.467 }, 00:10:31.467 { 00:10:31.467 "name": "BaseBdev2", 00:10:31.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.467 "is_configured": false, 00:10:31.467 "data_offset": 0, 00:10:31.467 "data_size": 0 00:10:31.467 }, 00:10:31.467 { 00:10:31.467 "name": "BaseBdev3", 00:10:31.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.467 "is_configured": false, 00:10:31.467 "data_offset": 0, 00:10:31.467 "data_size": 0 00:10:31.467 }, 00:10:31.467 { 00:10:31.467 "name": "BaseBdev4", 00:10:31.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.467 "is_configured": false, 00:10:31.467 "data_offset": 0, 00:10:31.467 "data_size": 0 00:10:31.467 } 00:10:31.467 ] 00:10:31.467 }' 00:10:31.467 08:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.467 08:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.727 08:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:31.727 08:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.727 08:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.727 [2024-10-05 08:47:08.112250] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:10:31.727 BaseBdev2 00:10:31.727 08:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.727 08:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:31.728 08:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:31.728 08:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:31.728 08:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:31.728 08:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:31.728 08:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:31.728 08:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:31.728 08:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.728 08:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.728 08:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.728 08:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:31.728 08:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.728 08:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.728 [ 00:10:31.728 { 00:10:31.728 "name": "BaseBdev2", 00:10:31.728 "aliases": [ 00:10:31.728 "a3132010-cd10-46da-89c4-24c392fb27c5" 00:10:31.728 ], 00:10:31.728 "product_name": "Malloc disk", 00:10:31.728 "block_size": 512, 00:10:31.728 "num_blocks": 65536, 00:10:31.728 "uuid": "a3132010-cd10-46da-89c4-24c392fb27c5", 00:10:31.728 
"assigned_rate_limits": { 00:10:31.728 "rw_ios_per_sec": 0, 00:10:31.728 "rw_mbytes_per_sec": 0, 00:10:31.728 "r_mbytes_per_sec": 0, 00:10:31.728 "w_mbytes_per_sec": 0 00:10:31.728 }, 00:10:31.728 "claimed": true, 00:10:31.728 "claim_type": "exclusive_write", 00:10:31.728 "zoned": false, 00:10:31.728 "supported_io_types": { 00:10:31.728 "read": true, 00:10:31.728 "write": true, 00:10:31.728 "unmap": true, 00:10:31.728 "flush": true, 00:10:31.728 "reset": true, 00:10:31.728 "nvme_admin": false, 00:10:31.728 "nvme_io": false, 00:10:31.728 "nvme_io_md": false, 00:10:31.728 "write_zeroes": true, 00:10:31.728 "zcopy": true, 00:10:31.728 "get_zone_info": false, 00:10:31.728 "zone_management": false, 00:10:31.728 "zone_append": false, 00:10:31.728 "compare": false, 00:10:31.728 "compare_and_write": false, 00:10:31.728 "abort": true, 00:10:31.728 "seek_hole": false, 00:10:31.728 "seek_data": false, 00:10:31.728 "copy": true, 00:10:31.728 "nvme_iov_md": false 00:10:31.728 }, 00:10:31.728 "memory_domains": [ 00:10:31.728 { 00:10:31.728 "dma_device_id": "system", 00:10:31.728 "dma_device_type": 1 00:10:31.728 }, 00:10:31.728 { 00:10:31.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.728 "dma_device_type": 2 00:10:31.728 } 00:10:31.728 ], 00:10:31.728 "driver_specific": {} 00:10:31.728 } 00:10:31.728 ] 00:10:31.728 08:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.728 08:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:31.728 08:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:31.728 08:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:31.728 08:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:31.728 08:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:31.728 08:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.728 08:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:31.728 08:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.728 08:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.728 08:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.728 08:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.728 08:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.728 08:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.728 08:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.728 08:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.728 08:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.728 08:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.728 08:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.988 08:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.988 "name": "Existed_Raid", 00:10:31.988 "uuid": "08913191-67f5-4575-91cf-41bb8a748a92", 00:10:31.988 "strip_size_kb": 64, 00:10:31.988 "state": "configuring", 00:10:31.988 "raid_level": "raid0", 00:10:31.988 "superblock": true, 00:10:31.988 "num_base_bdevs": 4, 00:10:31.988 "num_base_bdevs_discovered": 2, 00:10:31.988 "num_base_bdevs_operational": 4, 
00:10:31.988 "base_bdevs_list": [ 00:10:31.988 { 00:10:31.988 "name": "BaseBdev1", 00:10:31.988 "uuid": "275e826b-1d5b-439c-86de-8b2779040be7", 00:10:31.988 "is_configured": true, 00:10:31.988 "data_offset": 2048, 00:10:31.988 "data_size": 63488 00:10:31.988 }, 00:10:31.988 { 00:10:31.988 "name": "BaseBdev2", 00:10:31.988 "uuid": "a3132010-cd10-46da-89c4-24c392fb27c5", 00:10:31.988 "is_configured": true, 00:10:31.988 "data_offset": 2048, 00:10:31.988 "data_size": 63488 00:10:31.988 }, 00:10:31.988 { 00:10:31.988 "name": "BaseBdev3", 00:10:31.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.988 "is_configured": false, 00:10:31.988 "data_offset": 0, 00:10:31.988 "data_size": 0 00:10:31.988 }, 00:10:31.988 { 00:10:31.988 "name": "BaseBdev4", 00:10:31.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.988 "is_configured": false, 00:10:31.988 "data_offset": 0, 00:10:31.988 "data_size": 0 00:10:31.988 } 00:10:31.988 ] 00:10:31.988 }' 00:10:31.988 08:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.988 08:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.248 08:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:32.248 08:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.248 08:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.248 [2024-10-05 08:47:08.615202] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:32.248 BaseBdev3 00:10:32.248 08:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.248 08:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:32.248 08:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev3 00:10:32.248 08:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:32.248 08:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:32.248 08:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:32.248 08:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:32.248 08:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:32.248 08:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.248 08:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.248 08:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.248 08:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:32.248 08:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.248 08:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.248 [ 00:10:32.248 { 00:10:32.248 "name": "BaseBdev3", 00:10:32.248 "aliases": [ 00:10:32.248 "f7a5b35e-9c15-49f6-8186-a765bea866f8" 00:10:32.248 ], 00:10:32.248 "product_name": "Malloc disk", 00:10:32.248 "block_size": 512, 00:10:32.248 "num_blocks": 65536, 00:10:32.248 "uuid": "f7a5b35e-9c15-49f6-8186-a765bea866f8", 00:10:32.248 "assigned_rate_limits": { 00:10:32.248 "rw_ios_per_sec": 0, 00:10:32.248 "rw_mbytes_per_sec": 0, 00:10:32.248 "r_mbytes_per_sec": 0, 00:10:32.248 "w_mbytes_per_sec": 0 00:10:32.248 }, 00:10:32.248 "claimed": true, 00:10:32.248 "claim_type": "exclusive_write", 00:10:32.248 "zoned": false, 00:10:32.248 "supported_io_types": { 00:10:32.248 "read": true, 00:10:32.248 
"write": true, 00:10:32.248 "unmap": true, 00:10:32.248 "flush": true, 00:10:32.248 "reset": true, 00:10:32.248 "nvme_admin": false, 00:10:32.248 "nvme_io": false, 00:10:32.248 "nvme_io_md": false, 00:10:32.248 "write_zeroes": true, 00:10:32.248 "zcopy": true, 00:10:32.248 "get_zone_info": false, 00:10:32.248 "zone_management": false, 00:10:32.248 "zone_append": false, 00:10:32.248 "compare": false, 00:10:32.248 "compare_and_write": false, 00:10:32.248 "abort": true, 00:10:32.248 "seek_hole": false, 00:10:32.248 "seek_data": false, 00:10:32.248 "copy": true, 00:10:32.248 "nvme_iov_md": false 00:10:32.248 }, 00:10:32.248 "memory_domains": [ 00:10:32.248 { 00:10:32.248 "dma_device_id": "system", 00:10:32.248 "dma_device_type": 1 00:10:32.248 }, 00:10:32.248 { 00:10:32.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.248 "dma_device_type": 2 00:10:32.248 } 00:10:32.248 ], 00:10:32.248 "driver_specific": {} 00:10:32.248 } 00:10:32.248 ] 00:10:32.248 08:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.248 08:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:32.248 08:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:32.248 08:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:32.248 08:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:32.249 08:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.249 08:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.249 08:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:32.249 08:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:32.249 08:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.249 08:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.249 08:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.249 08:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.249 08:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.249 08:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.249 08:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.249 08:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.249 08:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.249 08:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.249 08:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.249 "name": "Existed_Raid", 00:10:32.249 "uuid": "08913191-67f5-4575-91cf-41bb8a748a92", 00:10:32.249 "strip_size_kb": 64, 00:10:32.249 "state": "configuring", 00:10:32.249 "raid_level": "raid0", 00:10:32.249 "superblock": true, 00:10:32.249 "num_base_bdevs": 4, 00:10:32.249 "num_base_bdevs_discovered": 3, 00:10:32.249 "num_base_bdevs_operational": 4, 00:10:32.249 "base_bdevs_list": [ 00:10:32.249 { 00:10:32.249 "name": "BaseBdev1", 00:10:32.249 "uuid": "275e826b-1d5b-439c-86de-8b2779040be7", 00:10:32.249 "is_configured": true, 00:10:32.249 "data_offset": 2048, 00:10:32.249 "data_size": 63488 00:10:32.249 }, 00:10:32.249 { 00:10:32.249 "name": "BaseBdev2", 00:10:32.249 "uuid": 
"a3132010-cd10-46da-89c4-24c392fb27c5", 00:10:32.249 "is_configured": true, 00:10:32.249 "data_offset": 2048, 00:10:32.249 "data_size": 63488 00:10:32.249 }, 00:10:32.249 { 00:10:32.249 "name": "BaseBdev3", 00:10:32.249 "uuid": "f7a5b35e-9c15-49f6-8186-a765bea866f8", 00:10:32.249 "is_configured": true, 00:10:32.249 "data_offset": 2048, 00:10:32.249 "data_size": 63488 00:10:32.249 }, 00:10:32.249 { 00:10:32.249 "name": "BaseBdev4", 00:10:32.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.249 "is_configured": false, 00:10:32.249 "data_offset": 0, 00:10:32.249 "data_size": 0 00:10:32.249 } 00:10:32.249 ] 00:10:32.249 }' 00:10:32.249 08:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.249 08:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.818 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:32.818 08:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.818 08:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.818 [2024-10-05 08:47:09.127654] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:32.818 [2024-10-05 08:47:09.127947] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:32.818 [2024-10-05 08:47:09.127986] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:32.818 BaseBdev4 00:10:32.818 [2024-10-05 08:47:09.128314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:32.818 [2024-10-05 08:47:09.128482] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:32.818 [2024-10-05 08:47:09.128496] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:10:32.818 [2024-10-05 08:47:09.128636] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:32.818 08:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.818 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:32.818 08:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:32.818 08:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:32.818 08:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:32.818 08:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:32.818 08:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:32.818 08:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:32.818 08:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.818 08:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.818 08:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.818 08:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:32.818 08:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.818 08:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.818 [ 00:10:32.818 { 00:10:32.818 "name": "BaseBdev4", 00:10:32.818 "aliases": [ 00:10:32.818 "e3cfd404-0b1a-4093-b637-44fab875c0e5" 00:10:32.818 ], 00:10:32.818 "product_name": "Malloc disk", 00:10:32.818 "block_size": 512, 00:10:32.818 
"num_blocks": 65536, 00:10:32.818 "uuid": "e3cfd404-0b1a-4093-b637-44fab875c0e5", 00:10:32.818 "assigned_rate_limits": { 00:10:32.818 "rw_ios_per_sec": 0, 00:10:32.818 "rw_mbytes_per_sec": 0, 00:10:32.818 "r_mbytes_per_sec": 0, 00:10:32.818 "w_mbytes_per_sec": 0 00:10:32.818 }, 00:10:32.818 "claimed": true, 00:10:32.818 "claim_type": "exclusive_write", 00:10:32.818 "zoned": false, 00:10:32.818 "supported_io_types": { 00:10:32.818 "read": true, 00:10:32.818 "write": true, 00:10:32.818 "unmap": true, 00:10:32.818 "flush": true, 00:10:32.818 "reset": true, 00:10:32.818 "nvme_admin": false, 00:10:32.818 "nvme_io": false, 00:10:32.818 "nvme_io_md": false, 00:10:32.818 "write_zeroes": true, 00:10:32.818 "zcopy": true, 00:10:32.818 "get_zone_info": false, 00:10:32.818 "zone_management": false, 00:10:32.818 "zone_append": false, 00:10:32.818 "compare": false, 00:10:32.818 "compare_and_write": false, 00:10:32.818 "abort": true, 00:10:32.819 "seek_hole": false, 00:10:32.819 "seek_data": false, 00:10:32.819 "copy": true, 00:10:32.819 "nvme_iov_md": false 00:10:32.819 }, 00:10:32.819 "memory_domains": [ 00:10:32.819 { 00:10:32.819 "dma_device_id": "system", 00:10:32.819 "dma_device_type": 1 00:10:32.819 }, 00:10:32.819 { 00:10:32.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.819 "dma_device_type": 2 00:10:32.819 } 00:10:32.819 ], 00:10:32.819 "driver_specific": {} 00:10:32.819 } 00:10:32.819 ] 00:10:32.819 08:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.819 08:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:32.819 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:32.819 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:32.819 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:32.819 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.819 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:32.819 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:32.819 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.819 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.819 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.819 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.819 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.819 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.819 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.819 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.819 08:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.819 08:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.819 08:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.819 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.819 "name": "Existed_Raid", 00:10:32.819 "uuid": "08913191-67f5-4575-91cf-41bb8a748a92", 00:10:32.819 "strip_size_kb": 64, 00:10:32.819 "state": "online", 00:10:32.819 "raid_level": "raid0", 00:10:32.819 "superblock": true, 00:10:32.819 "num_base_bdevs": 4, 
00:10:32.819 "num_base_bdevs_discovered": 4, 00:10:32.819 "num_base_bdevs_operational": 4, 00:10:32.819 "base_bdevs_list": [ 00:10:32.819 { 00:10:32.819 "name": "BaseBdev1", 00:10:32.819 "uuid": "275e826b-1d5b-439c-86de-8b2779040be7", 00:10:32.819 "is_configured": true, 00:10:32.819 "data_offset": 2048, 00:10:32.819 "data_size": 63488 00:10:32.819 }, 00:10:32.819 { 00:10:32.819 "name": "BaseBdev2", 00:10:32.819 "uuid": "a3132010-cd10-46da-89c4-24c392fb27c5", 00:10:32.819 "is_configured": true, 00:10:32.819 "data_offset": 2048, 00:10:32.819 "data_size": 63488 00:10:32.819 }, 00:10:32.819 { 00:10:32.819 "name": "BaseBdev3", 00:10:32.819 "uuid": "f7a5b35e-9c15-49f6-8186-a765bea866f8", 00:10:32.819 "is_configured": true, 00:10:32.819 "data_offset": 2048, 00:10:32.819 "data_size": 63488 00:10:32.819 }, 00:10:32.819 { 00:10:32.819 "name": "BaseBdev4", 00:10:32.819 "uuid": "e3cfd404-0b1a-4093-b637-44fab875c0e5", 00:10:32.819 "is_configured": true, 00:10:32.819 "data_offset": 2048, 00:10:32.819 "data_size": 63488 00:10:32.819 } 00:10:32.819 ] 00:10:32.819 }' 00:10:32.819 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.819 08:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.390 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:33.390 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:33.390 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:33.390 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:33.390 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:33.390 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:33.390 
08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:33.390 08:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.390 08:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.390 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:33.390 [2024-10-05 08:47:09.615222] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:33.390 08:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.390 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:33.390 "name": "Existed_Raid", 00:10:33.390 "aliases": [ 00:10:33.390 "08913191-67f5-4575-91cf-41bb8a748a92" 00:10:33.390 ], 00:10:33.390 "product_name": "Raid Volume", 00:10:33.390 "block_size": 512, 00:10:33.390 "num_blocks": 253952, 00:10:33.390 "uuid": "08913191-67f5-4575-91cf-41bb8a748a92", 00:10:33.390 "assigned_rate_limits": { 00:10:33.390 "rw_ios_per_sec": 0, 00:10:33.390 "rw_mbytes_per_sec": 0, 00:10:33.390 "r_mbytes_per_sec": 0, 00:10:33.390 "w_mbytes_per_sec": 0 00:10:33.390 }, 00:10:33.390 "claimed": false, 00:10:33.390 "zoned": false, 00:10:33.390 "supported_io_types": { 00:10:33.390 "read": true, 00:10:33.390 "write": true, 00:10:33.390 "unmap": true, 00:10:33.390 "flush": true, 00:10:33.390 "reset": true, 00:10:33.390 "nvme_admin": false, 00:10:33.390 "nvme_io": false, 00:10:33.390 "nvme_io_md": false, 00:10:33.390 "write_zeroes": true, 00:10:33.390 "zcopy": false, 00:10:33.390 "get_zone_info": false, 00:10:33.390 "zone_management": false, 00:10:33.390 "zone_append": false, 00:10:33.390 "compare": false, 00:10:33.390 "compare_and_write": false, 00:10:33.390 "abort": false, 00:10:33.390 "seek_hole": false, 00:10:33.390 "seek_data": false, 00:10:33.390 "copy": false, 00:10:33.390 
"nvme_iov_md": false 00:10:33.390 }, 00:10:33.390 "memory_domains": [ 00:10:33.390 { 00:10:33.390 "dma_device_id": "system", 00:10:33.390 "dma_device_type": 1 00:10:33.390 }, 00:10:33.390 { 00:10:33.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.390 "dma_device_type": 2 00:10:33.390 }, 00:10:33.390 { 00:10:33.390 "dma_device_id": "system", 00:10:33.390 "dma_device_type": 1 00:10:33.390 }, 00:10:33.390 { 00:10:33.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.390 "dma_device_type": 2 00:10:33.390 }, 00:10:33.390 { 00:10:33.390 "dma_device_id": "system", 00:10:33.390 "dma_device_type": 1 00:10:33.390 }, 00:10:33.390 { 00:10:33.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.390 "dma_device_type": 2 00:10:33.390 }, 00:10:33.390 { 00:10:33.390 "dma_device_id": "system", 00:10:33.390 "dma_device_type": 1 00:10:33.390 }, 00:10:33.390 { 00:10:33.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.390 "dma_device_type": 2 00:10:33.390 } 00:10:33.390 ], 00:10:33.390 "driver_specific": { 00:10:33.390 "raid": { 00:10:33.390 "uuid": "08913191-67f5-4575-91cf-41bb8a748a92", 00:10:33.390 "strip_size_kb": 64, 00:10:33.390 "state": "online", 00:10:33.390 "raid_level": "raid0", 00:10:33.390 "superblock": true, 00:10:33.390 "num_base_bdevs": 4, 00:10:33.390 "num_base_bdevs_discovered": 4, 00:10:33.390 "num_base_bdevs_operational": 4, 00:10:33.390 "base_bdevs_list": [ 00:10:33.390 { 00:10:33.390 "name": "BaseBdev1", 00:10:33.390 "uuid": "275e826b-1d5b-439c-86de-8b2779040be7", 00:10:33.390 "is_configured": true, 00:10:33.390 "data_offset": 2048, 00:10:33.390 "data_size": 63488 00:10:33.390 }, 00:10:33.390 { 00:10:33.390 "name": "BaseBdev2", 00:10:33.390 "uuid": "a3132010-cd10-46da-89c4-24c392fb27c5", 00:10:33.390 "is_configured": true, 00:10:33.390 "data_offset": 2048, 00:10:33.390 "data_size": 63488 00:10:33.390 }, 00:10:33.390 { 00:10:33.390 "name": "BaseBdev3", 00:10:33.390 "uuid": "f7a5b35e-9c15-49f6-8186-a765bea866f8", 00:10:33.390 "is_configured": true, 
00:10:33.390 "data_offset": 2048, 00:10:33.390 "data_size": 63488 00:10:33.390 }, 00:10:33.390 { 00:10:33.390 "name": "BaseBdev4", 00:10:33.390 "uuid": "e3cfd404-0b1a-4093-b637-44fab875c0e5", 00:10:33.390 "is_configured": true, 00:10:33.390 "data_offset": 2048, 00:10:33.390 "data_size": 63488 00:10:33.390 } 00:10:33.390 ] 00:10:33.390 } 00:10:33.390 } 00:10:33.390 }' 00:10:33.390 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:33.390 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:33.390 BaseBdev2 00:10:33.390 BaseBdev3 00:10:33.390 BaseBdev4' 00:10:33.390 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.390 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:33.390 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.390 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:33.390 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.390 08:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.390 08:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.390 08:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.390 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.390 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.390 08:47:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.390 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:33.390 08:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.390 08:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.390 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.390 08:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.390 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.390 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.390 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.390 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:33.390 08:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.390 08:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.390 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.651 08:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.651 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.651 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.651 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:33.651 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:33.651 08:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.651 08:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.651 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.651 08:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.651 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.651 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.651 08:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:33.651 08:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.651 08:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.651 [2024-10-05 08:47:09.942323] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:33.651 [2024-10-05 08:47:09.942357] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:33.651 [2024-10-05 08:47:09.942412] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:33.651 08:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.651 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:33.651 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:33.651 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:33.651 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:33.651 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:33.651 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:33.651 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.652 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:33.652 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:33.652 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.652 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:33.652 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.652 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.652 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.652 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.652 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.652 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.652 08:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.652 08:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.652 08:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:33.652 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.652 "name": "Existed_Raid", 00:10:33.652 "uuid": "08913191-67f5-4575-91cf-41bb8a748a92", 00:10:33.652 "strip_size_kb": 64, 00:10:33.652 "state": "offline", 00:10:33.652 "raid_level": "raid0", 00:10:33.652 "superblock": true, 00:10:33.652 "num_base_bdevs": 4, 00:10:33.652 "num_base_bdevs_discovered": 3, 00:10:33.652 "num_base_bdevs_operational": 3, 00:10:33.652 "base_bdevs_list": [ 00:10:33.652 { 00:10:33.652 "name": null, 00:10:33.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.652 "is_configured": false, 00:10:33.652 "data_offset": 0, 00:10:33.652 "data_size": 63488 00:10:33.652 }, 00:10:33.652 { 00:10:33.652 "name": "BaseBdev2", 00:10:33.652 "uuid": "a3132010-cd10-46da-89c4-24c392fb27c5", 00:10:33.652 "is_configured": true, 00:10:33.652 "data_offset": 2048, 00:10:33.652 "data_size": 63488 00:10:33.652 }, 00:10:33.652 { 00:10:33.652 "name": "BaseBdev3", 00:10:33.652 "uuid": "f7a5b35e-9c15-49f6-8186-a765bea866f8", 00:10:33.652 "is_configured": true, 00:10:33.652 "data_offset": 2048, 00:10:33.652 "data_size": 63488 00:10:33.652 }, 00:10:33.652 { 00:10:33.652 "name": "BaseBdev4", 00:10:33.652 "uuid": "e3cfd404-0b1a-4093-b637-44fab875c0e5", 00:10:33.652 "is_configured": true, 00:10:33.652 "data_offset": 2048, 00:10:33.652 "data_size": 63488 00:10:33.652 } 00:10:33.652 ] 00:10:33.652 }' 00:10:33.652 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.652 08:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.221 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:34.221 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:34.221 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.221 
08:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.221 08:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.221 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:34.221 08:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.221 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:34.221 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:34.221 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:34.221 08:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.221 08:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.221 [2024-10-05 08:47:10.539157] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:34.221 08:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.221 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:34.221 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:34.222 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:34.222 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.222 08:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.222 08:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.222 08:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:34.222 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:34.222 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:34.222 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:34.222 08:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.222 08:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.222 [2024-10-05 08:47:10.684241] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:34.481 08:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.481 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:34.481 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:34.481 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:34.481 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.481 08:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.481 08:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.481 08:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.481 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:34.481 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:34.481 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:34.482 08:47:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.482 08:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.482 [2024-10-05 08:47:10.840705] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:34.482 [2024-10-05 08:47:10.840812] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:34.482 08:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.482 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:34.482 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:34.482 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.482 08:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.482 08:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.482 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:34.742 08:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.742 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:34.742 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:34.742 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:34.742 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:34.742 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:34.742 08:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:34.742 08:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.742 08:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.742 BaseBdev2 00:10:34.742 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.742 08:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:34.742 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:34.742 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:34.742 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:34.742 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:34.742 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:34.742 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:34.742 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.742 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.742 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.742 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:34.742 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.742 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.742 [ 00:10:34.742 { 00:10:34.742 "name": "BaseBdev2", 00:10:34.742 "aliases": [ 00:10:34.742 
"94e8438c-0693-44f4-8f0c-1af99e4492c3" 00:10:34.742 ], 00:10:34.742 "product_name": "Malloc disk", 00:10:34.742 "block_size": 512, 00:10:34.742 "num_blocks": 65536, 00:10:34.742 "uuid": "94e8438c-0693-44f4-8f0c-1af99e4492c3", 00:10:34.742 "assigned_rate_limits": { 00:10:34.742 "rw_ios_per_sec": 0, 00:10:34.742 "rw_mbytes_per_sec": 0, 00:10:34.742 "r_mbytes_per_sec": 0, 00:10:34.742 "w_mbytes_per_sec": 0 00:10:34.742 }, 00:10:34.742 "claimed": false, 00:10:34.742 "zoned": false, 00:10:34.742 "supported_io_types": { 00:10:34.742 "read": true, 00:10:34.742 "write": true, 00:10:34.742 "unmap": true, 00:10:34.742 "flush": true, 00:10:34.742 "reset": true, 00:10:34.742 "nvme_admin": false, 00:10:34.742 "nvme_io": false, 00:10:34.742 "nvme_io_md": false, 00:10:34.742 "write_zeroes": true, 00:10:34.742 "zcopy": true, 00:10:34.742 "get_zone_info": false, 00:10:34.742 "zone_management": false, 00:10:34.742 "zone_append": false, 00:10:34.742 "compare": false, 00:10:34.742 "compare_and_write": false, 00:10:34.742 "abort": true, 00:10:34.742 "seek_hole": false, 00:10:34.742 "seek_data": false, 00:10:34.742 "copy": true, 00:10:34.742 "nvme_iov_md": false 00:10:34.742 }, 00:10:34.742 "memory_domains": [ 00:10:34.742 { 00:10:34.742 "dma_device_id": "system", 00:10:34.742 "dma_device_type": 1 00:10:34.742 }, 00:10:34.742 { 00:10:34.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.742 "dma_device_type": 2 00:10:34.742 } 00:10:34.742 ], 00:10:34.742 "driver_specific": {} 00:10:34.742 } 00:10:34.742 ] 00:10:34.742 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.742 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:34.742 08:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:34.742 08:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:34.742 08:47:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:34.742 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.742 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.742 BaseBdev3 00:10:34.742 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.742 08:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:34.742 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:34.742 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:34.742 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:34.742 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:34.742 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:34.742 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:34.742 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.742 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.742 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.742 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:34.742 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.742 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.742 [ 00:10:34.742 { 
00:10:34.742 "name": "BaseBdev3", 00:10:34.742 "aliases": [ 00:10:34.742 "8f17ad16-c1f6-4a9d-b185-2dd0b8978a4e" 00:10:34.742 ], 00:10:34.742 "product_name": "Malloc disk", 00:10:34.742 "block_size": 512, 00:10:34.742 "num_blocks": 65536, 00:10:34.742 "uuid": "8f17ad16-c1f6-4a9d-b185-2dd0b8978a4e", 00:10:34.742 "assigned_rate_limits": { 00:10:34.742 "rw_ios_per_sec": 0, 00:10:34.742 "rw_mbytes_per_sec": 0, 00:10:34.742 "r_mbytes_per_sec": 0, 00:10:34.742 "w_mbytes_per_sec": 0 00:10:34.742 }, 00:10:34.742 "claimed": false, 00:10:34.742 "zoned": false, 00:10:34.742 "supported_io_types": { 00:10:34.742 "read": true, 00:10:34.742 "write": true, 00:10:34.742 "unmap": true, 00:10:34.742 "flush": true, 00:10:34.742 "reset": true, 00:10:34.742 "nvme_admin": false, 00:10:34.742 "nvme_io": false, 00:10:34.742 "nvme_io_md": false, 00:10:34.742 "write_zeroes": true, 00:10:34.742 "zcopy": true, 00:10:34.742 "get_zone_info": false, 00:10:34.742 "zone_management": false, 00:10:34.742 "zone_append": false, 00:10:34.742 "compare": false, 00:10:34.742 "compare_and_write": false, 00:10:34.742 "abort": true, 00:10:34.742 "seek_hole": false, 00:10:34.742 "seek_data": false, 00:10:34.742 "copy": true, 00:10:34.742 "nvme_iov_md": false 00:10:34.742 }, 00:10:34.742 "memory_domains": [ 00:10:34.742 { 00:10:34.742 "dma_device_id": "system", 00:10:34.742 "dma_device_type": 1 00:10:34.742 }, 00:10:34.742 { 00:10:34.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.742 "dma_device_type": 2 00:10:34.742 } 00:10:34.742 ], 00:10:34.742 "driver_specific": {} 00:10:34.742 } 00:10:34.742 ] 00:10:34.742 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.742 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:34.742 08:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:34.742 08:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:34.742 08:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:34.742 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.742 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.002 BaseBdev4 00:10:35.002 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.002 08:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:35.002 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:35.002 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:35.002 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:35.002 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:35.002 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:35.002 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:35.002 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.002 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.002 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.002 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:35.002 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.003 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:35.003 [ 00:10:35.003 { 00:10:35.003 "name": "BaseBdev4", 00:10:35.003 "aliases": [ 00:10:35.003 "21a0be45-dbe5-45e3-b74b-659bedc55ac8" 00:10:35.003 ], 00:10:35.003 "product_name": "Malloc disk", 00:10:35.003 "block_size": 512, 00:10:35.003 "num_blocks": 65536, 00:10:35.003 "uuid": "21a0be45-dbe5-45e3-b74b-659bedc55ac8", 00:10:35.003 "assigned_rate_limits": { 00:10:35.003 "rw_ios_per_sec": 0, 00:10:35.003 "rw_mbytes_per_sec": 0, 00:10:35.003 "r_mbytes_per_sec": 0, 00:10:35.003 "w_mbytes_per_sec": 0 00:10:35.003 }, 00:10:35.003 "claimed": false, 00:10:35.003 "zoned": false, 00:10:35.003 "supported_io_types": { 00:10:35.003 "read": true, 00:10:35.003 "write": true, 00:10:35.003 "unmap": true, 00:10:35.003 "flush": true, 00:10:35.003 "reset": true, 00:10:35.003 "nvme_admin": false, 00:10:35.003 "nvme_io": false, 00:10:35.003 "nvme_io_md": false, 00:10:35.003 "write_zeroes": true, 00:10:35.003 "zcopy": true, 00:10:35.003 "get_zone_info": false, 00:10:35.003 "zone_management": false, 00:10:35.003 "zone_append": false, 00:10:35.003 "compare": false, 00:10:35.003 "compare_and_write": false, 00:10:35.003 "abort": true, 00:10:35.003 "seek_hole": false, 00:10:35.003 "seek_data": false, 00:10:35.003 "copy": true, 00:10:35.003 "nvme_iov_md": false 00:10:35.003 }, 00:10:35.003 "memory_domains": [ 00:10:35.003 { 00:10:35.003 "dma_device_id": "system", 00:10:35.003 "dma_device_type": 1 00:10:35.003 }, 00:10:35.003 { 00:10:35.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.003 "dma_device_type": 2 00:10:35.003 } 00:10:35.003 ], 00:10:35.003 "driver_specific": {} 00:10:35.003 } 00:10:35.003 ] 00:10:35.003 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.003 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:35.003 08:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:35.003 08:47:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:35.003 08:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:35.003 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.003 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.003 [2024-10-05 08:47:11.266757] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:35.003 [2024-10-05 08:47:11.266859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:35.003 [2024-10-05 08:47:11.266903] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:35.003 [2024-10-05 08:47:11.268997] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:35.003 [2024-10-05 08:47:11.269092] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:35.003 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.003 08:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:35.003 08:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.003 08:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.003 08:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:35.003 08:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.003 08:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:35.003 08:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.003 08:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.003 08:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.003 08:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.003 08:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.003 08:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.003 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.003 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.003 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.003 08:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.003 "name": "Existed_Raid", 00:10:35.003 "uuid": "d2720341-a011-4f6d-b185-983349514320", 00:10:35.003 "strip_size_kb": 64, 00:10:35.003 "state": "configuring", 00:10:35.003 "raid_level": "raid0", 00:10:35.003 "superblock": true, 00:10:35.003 "num_base_bdevs": 4, 00:10:35.003 "num_base_bdevs_discovered": 3, 00:10:35.003 "num_base_bdevs_operational": 4, 00:10:35.003 "base_bdevs_list": [ 00:10:35.003 { 00:10:35.003 "name": "BaseBdev1", 00:10:35.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.003 "is_configured": false, 00:10:35.003 "data_offset": 0, 00:10:35.003 "data_size": 0 00:10:35.003 }, 00:10:35.003 { 00:10:35.003 "name": "BaseBdev2", 00:10:35.003 "uuid": "94e8438c-0693-44f4-8f0c-1af99e4492c3", 00:10:35.003 "is_configured": true, 00:10:35.003 "data_offset": 2048, 00:10:35.003 "data_size": 63488 
00:10:35.003 }, 00:10:35.003 { 00:10:35.003 "name": "BaseBdev3", 00:10:35.003 "uuid": "8f17ad16-c1f6-4a9d-b185-2dd0b8978a4e", 00:10:35.003 "is_configured": true, 00:10:35.003 "data_offset": 2048, 00:10:35.003 "data_size": 63488 00:10:35.003 }, 00:10:35.003 { 00:10:35.003 "name": "BaseBdev4", 00:10:35.003 "uuid": "21a0be45-dbe5-45e3-b74b-659bedc55ac8", 00:10:35.003 "is_configured": true, 00:10:35.003 "data_offset": 2048, 00:10:35.003 "data_size": 63488 00:10:35.003 } 00:10:35.003 ] 00:10:35.003 }' 00:10:35.003 08:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.003 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.262 08:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:35.262 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.262 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.262 [2024-10-05 08:47:11.718006] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:35.262 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.262 08:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:35.262 08:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.262 08:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.262 08:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:35.262 08:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.262 08:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:35.262 08:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.262 08:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.262 08:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.262 08:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.262 08:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.262 08:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.262 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.262 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.522 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.522 08:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.522 "name": "Existed_Raid", 00:10:35.522 "uuid": "d2720341-a011-4f6d-b185-983349514320", 00:10:35.522 "strip_size_kb": 64, 00:10:35.522 "state": "configuring", 00:10:35.522 "raid_level": "raid0", 00:10:35.522 "superblock": true, 00:10:35.522 "num_base_bdevs": 4, 00:10:35.522 "num_base_bdevs_discovered": 2, 00:10:35.522 "num_base_bdevs_operational": 4, 00:10:35.522 "base_bdevs_list": [ 00:10:35.522 { 00:10:35.522 "name": "BaseBdev1", 00:10:35.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.522 "is_configured": false, 00:10:35.522 "data_offset": 0, 00:10:35.522 "data_size": 0 00:10:35.522 }, 00:10:35.522 { 00:10:35.522 "name": null, 00:10:35.522 "uuid": "94e8438c-0693-44f4-8f0c-1af99e4492c3", 00:10:35.522 "is_configured": false, 00:10:35.522 "data_offset": 0, 00:10:35.522 "data_size": 63488 
00:10:35.522 }, 00:10:35.522 { 00:10:35.522 "name": "BaseBdev3", 00:10:35.522 "uuid": "8f17ad16-c1f6-4a9d-b185-2dd0b8978a4e", 00:10:35.522 "is_configured": true, 00:10:35.522 "data_offset": 2048, 00:10:35.522 "data_size": 63488 00:10:35.522 }, 00:10:35.522 { 00:10:35.522 "name": "BaseBdev4", 00:10:35.522 "uuid": "21a0be45-dbe5-45e3-b74b-659bedc55ac8", 00:10:35.522 "is_configured": true, 00:10:35.522 "data_offset": 2048, 00:10:35.522 "data_size": 63488 00:10:35.522 } 00:10:35.522 ] 00:10:35.522 }' 00:10:35.522 08:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.522 08:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.782 08:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.782 08:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:35.782 08:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.782 08:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.782 08:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.782 08:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:35.782 08:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:35.782 08:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.782 08:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.075 BaseBdev1 00:10:36.075 [2024-10-05 08:47:12.263281] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:36.075 08:47:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.075 08:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:36.075 08:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:36.075 08:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:36.075 08:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:36.075 08:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:36.075 08:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:36.075 08:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:36.075 08:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.075 08:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.075 08:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.075 08:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:36.075 08:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.075 08:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.075 [ 00:10:36.075 { 00:10:36.075 "name": "BaseBdev1", 00:10:36.075 "aliases": [ 00:10:36.075 "8edfe56c-46e7-4f7d-a971-ef1a2bbb48bc" 00:10:36.075 ], 00:10:36.075 "product_name": "Malloc disk", 00:10:36.075 "block_size": 512, 00:10:36.075 "num_blocks": 65536, 00:10:36.075 "uuid": "8edfe56c-46e7-4f7d-a971-ef1a2bbb48bc", 00:10:36.075 "assigned_rate_limits": { 00:10:36.075 "rw_ios_per_sec": 0, 00:10:36.075 "rw_mbytes_per_sec": 0, 
00:10:36.075 "r_mbytes_per_sec": 0, 00:10:36.075 "w_mbytes_per_sec": 0 00:10:36.075 }, 00:10:36.075 "claimed": true, 00:10:36.075 "claim_type": "exclusive_write", 00:10:36.075 "zoned": false, 00:10:36.075 "supported_io_types": { 00:10:36.075 "read": true, 00:10:36.075 "write": true, 00:10:36.075 "unmap": true, 00:10:36.075 "flush": true, 00:10:36.075 "reset": true, 00:10:36.075 "nvme_admin": false, 00:10:36.075 "nvme_io": false, 00:10:36.075 "nvme_io_md": false, 00:10:36.075 "write_zeroes": true, 00:10:36.075 "zcopy": true, 00:10:36.075 "get_zone_info": false, 00:10:36.075 "zone_management": false, 00:10:36.075 "zone_append": false, 00:10:36.075 "compare": false, 00:10:36.075 "compare_and_write": false, 00:10:36.075 "abort": true, 00:10:36.075 "seek_hole": false, 00:10:36.075 "seek_data": false, 00:10:36.075 "copy": true, 00:10:36.075 "nvme_iov_md": false 00:10:36.075 }, 00:10:36.075 "memory_domains": [ 00:10:36.075 { 00:10:36.075 "dma_device_id": "system", 00:10:36.075 "dma_device_type": 1 00:10:36.075 }, 00:10:36.075 { 00:10:36.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.075 "dma_device_type": 2 00:10:36.075 } 00:10:36.075 ], 00:10:36.075 "driver_specific": {} 00:10:36.075 } 00:10:36.075 ] 00:10:36.075 08:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.075 08:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:36.075 08:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:36.075 08:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.075 08:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.075 08:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:36.075 08:47:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.075 08:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.075 08:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.075 08:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.075 08:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.075 08:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.075 08:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.075 08:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.075 08:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.075 08:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.075 08:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.075 08:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.075 "name": "Existed_Raid", 00:10:36.075 "uuid": "d2720341-a011-4f6d-b185-983349514320", 00:10:36.075 "strip_size_kb": 64, 00:10:36.075 "state": "configuring", 00:10:36.075 "raid_level": "raid0", 00:10:36.075 "superblock": true, 00:10:36.075 "num_base_bdevs": 4, 00:10:36.075 "num_base_bdevs_discovered": 3, 00:10:36.075 "num_base_bdevs_operational": 4, 00:10:36.075 "base_bdevs_list": [ 00:10:36.075 { 00:10:36.075 "name": "BaseBdev1", 00:10:36.075 "uuid": "8edfe56c-46e7-4f7d-a971-ef1a2bbb48bc", 00:10:36.076 "is_configured": true, 00:10:36.076 "data_offset": 2048, 00:10:36.076 "data_size": 63488 00:10:36.076 }, 00:10:36.076 { 
00:10:36.076 "name": null, 00:10:36.076 "uuid": "94e8438c-0693-44f4-8f0c-1af99e4492c3", 00:10:36.076 "is_configured": false, 00:10:36.076 "data_offset": 0, 00:10:36.076 "data_size": 63488 00:10:36.076 }, 00:10:36.076 { 00:10:36.076 "name": "BaseBdev3", 00:10:36.076 "uuid": "8f17ad16-c1f6-4a9d-b185-2dd0b8978a4e", 00:10:36.076 "is_configured": true, 00:10:36.076 "data_offset": 2048, 00:10:36.076 "data_size": 63488 00:10:36.076 }, 00:10:36.076 { 00:10:36.076 "name": "BaseBdev4", 00:10:36.076 "uuid": "21a0be45-dbe5-45e3-b74b-659bedc55ac8", 00:10:36.076 "is_configured": true, 00:10:36.076 "data_offset": 2048, 00:10:36.076 "data_size": 63488 00:10:36.076 } 00:10:36.076 ] 00:10:36.076 }' 00:10:36.076 08:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.076 08:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.338 08:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.338 08:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.338 08:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.338 08:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:36.338 08:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.338 08:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:36.338 08:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:36.338 08:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.338 08:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.338 [2024-10-05 08:47:12.794405] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:36.338 08:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.338 08:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:36.338 08:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.338 08:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.338 08:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:36.338 08:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.338 08:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.338 08:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.338 08:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.338 08:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.338 08:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.338 08:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.338 08:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.598 08:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.598 08:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.598 08:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.598 08:47:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.598 "name": "Existed_Raid", 00:10:36.598 "uuid": "d2720341-a011-4f6d-b185-983349514320", 00:10:36.598 "strip_size_kb": 64, 00:10:36.598 "state": "configuring", 00:10:36.598 "raid_level": "raid0", 00:10:36.598 "superblock": true, 00:10:36.598 "num_base_bdevs": 4, 00:10:36.598 "num_base_bdevs_discovered": 2, 00:10:36.598 "num_base_bdevs_operational": 4, 00:10:36.598 "base_bdevs_list": [ 00:10:36.598 { 00:10:36.598 "name": "BaseBdev1", 00:10:36.598 "uuid": "8edfe56c-46e7-4f7d-a971-ef1a2bbb48bc", 00:10:36.598 "is_configured": true, 00:10:36.598 "data_offset": 2048, 00:10:36.598 "data_size": 63488 00:10:36.598 }, 00:10:36.598 { 00:10:36.598 "name": null, 00:10:36.598 "uuid": "94e8438c-0693-44f4-8f0c-1af99e4492c3", 00:10:36.598 "is_configured": false, 00:10:36.598 "data_offset": 0, 00:10:36.598 "data_size": 63488 00:10:36.598 }, 00:10:36.598 { 00:10:36.598 "name": null, 00:10:36.598 "uuid": "8f17ad16-c1f6-4a9d-b185-2dd0b8978a4e", 00:10:36.598 "is_configured": false, 00:10:36.598 "data_offset": 0, 00:10:36.598 "data_size": 63488 00:10:36.598 }, 00:10:36.598 { 00:10:36.598 "name": "BaseBdev4", 00:10:36.598 "uuid": "21a0be45-dbe5-45e3-b74b-659bedc55ac8", 00:10:36.598 "is_configured": true, 00:10:36.598 "data_offset": 2048, 00:10:36.598 "data_size": 63488 00:10:36.598 } 00:10:36.598 ] 00:10:36.598 }' 00:10:36.598 08:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.598 08:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.858 08:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:36.858 08:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.858 08:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.858 
08:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.858 08:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.858 08:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:36.858 08:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:36.858 08:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.858 08:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.858 [2024-10-05 08:47:13.245679] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:36.858 08:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.859 08:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:36.859 08:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.859 08:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.859 08:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:36.859 08:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.859 08:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.859 08:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.859 08:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.859 08:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:36.859 08:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.859 08:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.859 08:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.859 08:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.859 08:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.859 08:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.859 08:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.859 "name": "Existed_Raid", 00:10:36.859 "uuid": "d2720341-a011-4f6d-b185-983349514320", 00:10:36.859 "strip_size_kb": 64, 00:10:36.859 "state": "configuring", 00:10:36.859 "raid_level": "raid0", 00:10:36.859 "superblock": true, 00:10:36.859 "num_base_bdevs": 4, 00:10:36.859 "num_base_bdevs_discovered": 3, 00:10:36.859 "num_base_bdevs_operational": 4, 00:10:36.859 "base_bdevs_list": [ 00:10:36.859 { 00:10:36.859 "name": "BaseBdev1", 00:10:36.859 "uuid": "8edfe56c-46e7-4f7d-a971-ef1a2bbb48bc", 00:10:36.859 "is_configured": true, 00:10:36.859 "data_offset": 2048, 00:10:36.859 "data_size": 63488 00:10:36.859 }, 00:10:36.859 { 00:10:36.859 "name": null, 00:10:36.859 "uuid": "94e8438c-0693-44f4-8f0c-1af99e4492c3", 00:10:36.859 "is_configured": false, 00:10:36.859 "data_offset": 0, 00:10:36.859 "data_size": 63488 00:10:36.859 }, 00:10:36.859 { 00:10:36.859 "name": "BaseBdev3", 00:10:36.859 "uuid": "8f17ad16-c1f6-4a9d-b185-2dd0b8978a4e", 00:10:36.859 "is_configured": true, 00:10:36.859 "data_offset": 2048, 00:10:36.859 "data_size": 63488 00:10:36.859 }, 00:10:36.859 { 00:10:36.859 "name": "BaseBdev4", 00:10:36.859 "uuid": 
"21a0be45-dbe5-45e3-b74b-659bedc55ac8", 00:10:36.859 "is_configured": true, 00:10:36.859 "data_offset": 2048, 00:10:36.859 "data_size": 63488 00:10:36.859 } 00:10:36.859 ] 00:10:36.859 }' 00:10:36.859 08:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.859 08:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.428 08:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:37.428 08:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.428 08:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.428 08:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.428 08:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.428 08:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:37.428 08:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:37.428 08:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.428 08:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.428 [2024-10-05 08:47:13.692942] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:37.428 08:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.428 08:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:37.428 08:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.428 08:47:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.428 08:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:37.428 08:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.428 08:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.428 08:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.429 08:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.429 08:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.429 08:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.429 08:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.429 08:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.429 08:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.429 08:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.429 08:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.429 08:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.429 "name": "Existed_Raid", 00:10:37.429 "uuid": "d2720341-a011-4f6d-b185-983349514320", 00:10:37.429 "strip_size_kb": 64, 00:10:37.429 "state": "configuring", 00:10:37.429 "raid_level": "raid0", 00:10:37.429 "superblock": true, 00:10:37.429 "num_base_bdevs": 4, 00:10:37.429 "num_base_bdevs_discovered": 2, 00:10:37.429 "num_base_bdevs_operational": 4, 00:10:37.429 "base_bdevs_list": [ 00:10:37.429 { 00:10:37.429 "name": null, 00:10:37.429 
"uuid": "8edfe56c-46e7-4f7d-a971-ef1a2bbb48bc", 00:10:37.429 "is_configured": false, 00:10:37.429 "data_offset": 0, 00:10:37.429 "data_size": 63488 00:10:37.429 }, 00:10:37.429 { 00:10:37.429 "name": null, 00:10:37.429 "uuid": "94e8438c-0693-44f4-8f0c-1af99e4492c3", 00:10:37.429 "is_configured": false, 00:10:37.429 "data_offset": 0, 00:10:37.429 "data_size": 63488 00:10:37.429 }, 00:10:37.429 { 00:10:37.429 "name": "BaseBdev3", 00:10:37.429 "uuid": "8f17ad16-c1f6-4a9d-b185-2dd0b8978a4e", 00:10:37.429 "is_configured": true, 00:10:37.429 "data_offset": 2048, 00:10:37.429 "data_size": 63488 00:10:37.429 }, 00:10:37.429 { 00:10:37.429 "name": "BaseBdev4", 00:10:37.429 "uuid": "21a0be45-dbe5-45e3-b74b-659bedc55ac8", 00:10:37.429 "is_configured": true, 00:10:37.429 "data_offset": 2048, 00:10:37.429 "data_size": 63488 00:10:37.429 } 00:10:37.429 ] 00:10:37.429 }' 00:10:37.429 08:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.429 08:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.999 08:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:37.999 08:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.999 08:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.999 08:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.999 08:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.999 08:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:37.999 08:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:37.999 08:47:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.999 08:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.999 [2024-10-05 08:47:14.261152] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:37.999 08:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.999 08:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:37.999 08:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.999 08:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.999 08:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:37.999 08:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.999 08:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.999 08:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.999 08:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.999 08:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.999 08:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.999 08:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.999 08:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.999 08:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.999 08:47:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.999 08:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.999 08:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.999 "name": "Existed_Raid", 00:10:37.999 "uuid": "d2720341-a011-4f6d-b185-983349514320", 00:10:37.999 "strip_size_kb": 64, 00:10:37.999 "state": "configuring", 00:10:37.999 "raid_level": "raid0", 00:10:37.999 "superblock": true, 00:10:37.999 "num_base_bdevs": 4, 00:10:37.999 "num_base_bdevs_discovered": 3, 00:10:37.999 "num_base_bdevs_operational": 4, 00:10:37.999 "base_bdevs_list": [ 00:10:37.999 { 00:10:37.999 "name": null, 00:10:37.999 "uuid": "8edfe56c-46e7-4f7d-a971-ef1a2bbb48bc", 00:10:37.999 "is_configured": false, 00:10:37.999 "data_offset": 0, 00:10:37.999 "data_size": 63488 00:10:37.999 }, 00:10:37.999 { 00:10:37.999 "name": "BaseBdev2", 00:10:37.999 "uuid": "94e8438c-0693-44f4-8f0c-1af99e4492c3", 00:10:37.999 "is_configured": true, 00:10:37.999 "data_offset": 2048, 00:10:37.999 "data_size": 63488 00:10:37.999 }, 00:10:37.999 { 00:10:37.999 "name": "BaseBdev3", 00:10:37.999 "uuid": "8f17ad16-c1f6-4a9d-b185-2dd0b8978a4e", 00:10:37.999 "is_configured": true, 00:10:37.999 "data_offset": 2048, 00:10:37.999 "data_size": 63488 00:10:37.999 }, 00:10:37.999 { 00:10:37.999 "name": "BaseBdev4", 00:10:37.999 "uuid": "21a0be45-dbe5-45e3-b74b-659bedc55ac8", 00:10:37.999 "is_configured": true, 00:10:37.999 "data_offset": 2048, 00:10:37.999 "data_size": 63488 00:10:37.999 } 00:10:37.999 ] 00:10:37.999 }' 00:10:37.999 08:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.999 08:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.257 08:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.257 08:47:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.257 08:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:38.257 08:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.257 08:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.516 08:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:38.516 08:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:38.516 08:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.516 08:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.516 08:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.516 08:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.516 08:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8edfe56c-46e7-4f7d-a971-ef1a2bbb48bc 00:10:38.516 08:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.516 08:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.516 [2024-10-05 08:47:14.838187] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:38.516 [2024-10-05 08:47:14.838523] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:38.516 [2024-10-05 08:47:14.838541] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:38.516 [2024-10-05 08:47:14.838821] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:10:38.516 NewBaseBdev 00:10:38.516 [2024-10-05 08:47:14.838975] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:38.516 [2024-10-05 08:47:14.838987] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:38.516 [2024-10-05 08:47:14.839123] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:38.516 08:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.516 08:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:38.516 08:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:38.516 08:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:38.516 08:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:38.516 08:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:38.516 08:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:38.516 08:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:38.516 08:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.516 08:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.516 08:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.516 08:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:38.516 08:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.516 08:47:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.516 [ 00:10:38.516 { 00:10:38.516 "name": "NewBaseBdev", 00:10:38.516 "aliases": [ 00:10:38.516 "8edfe56c-46e7-4f7d-a971-ef1a2bbb48bc" 00:10:38.516 ], 00:10:38.516 "product_name": "Malloc disk", 00:10:38.516 "block_size": 512, 00:10:38.516 "num_blocks": 65536, 00:10:38.517 "uuid": "8edfe56c-46e7-4f7d-a971-ef1a2bbb48bc", 00:10:38.517 "assigned_rate_limits": { 00:10:38.517 "rw_ios_per_sec": 0, 00:10:38.517 "rw_mbytes_per_sec": 0, 00:10:38.517 "r_mbytes_per_sec": 0, 00:10:38.517 "w_mbytes_per_sec": 0 00:10:38.517 }, 00:10:38.517 "claimed": true, 00:10:38.517 "claim_type": "exclusive_write", 00:10:38.517 "zoned": false, 00:10:38.517 "supported_io_types": { 00:10:38.517 "read": true, 00:10:38.517 "write": true, 00:10:38.517 "unmap": true, 00:10:38.517 "flush": true, 00:10:38.517 "reset": true, 00:10:38.517 "nvme_admin": false, 00:10:38.517 "nvme_io": false, 00:10:38.517 "nvme_io_md": false, 00:10:38.517 "write_zeroes": true, 00:10:38.517 "zcopy": true, 00:10:38.517 "get_zone_info": false, 00:10:38.517 "zone_management": false, 00:10:38.517 "zone_append": false, 00:10:38.517 "compare": false, 00:10:38.517 "compare_and_write": false, 00:10:38.517 "abort": true, 00:10:38.517 "seek_hole": false, 00:10:38.517 "seek_data": false, 00:10:38.517 "copy": true, 00:10:38.517 "nvme_iov_md": false 00:10:38.517 }, 00:10:38.517 "memory_domains": [ 00:10:38.517 { 00:10:38.517 "dma_device_id": "system", 00:10:38.517 "dma_device_type": 1 00:10:38.517 }, 00:10:38.517 { 00:10:38.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.517 "dma_device_type": 2 00:10:38.517 } 00:10:38.517 ], 00:10:38.517 "driver_specific": {} 00:10:38.517 } 00:10:38.517 ] 00:10:38.517 08:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.517 08:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:38.517 08:47:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:38.517 08:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.517 08:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:38.517 08:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:38.517 08:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.517 08:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.517 08:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.517 08:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.517 08:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.517 08:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.517 08:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.517 08:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.517 08:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.517 08:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.517 08:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.517 08:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.517 "name": "Existed_Raid", 00:10:38.517 "uuid": "d2720341-a011-4f6d-b185-983349514320", 00:10:38.517 "strip_size_kb": 64, 00:10:38.517 
"state": "online", 00:10:38.517 "raid_level": "raid0", 00:10:38.517 "superblock": true, 00:10:38.517 "num_base_bdevs": 4, 00:10:38.517 "num_base_bdevs_discovered": 4, 00:10:38.517 "num_base_bdevs_operational": 4, 00:10:38.517 "base_bdevs_list": [ 00:10:38.517 { 00:10:38.517 "name": "NewBaseBdev", 00:10:38.517 "uuid": "8edfe56c-46e7-4f7d-a971-ef1a2bbb48bc", 00:10:38.517 "is_configured": true, 00:10:38.517 "data_offset": 2048, 00:10:38.517 "data_size": 63488 00:10:38.517 }, 00:10:38.517 { 00:10:38.517 "name": "BaseBdev2", 00:10:38.517 "uuid": "94e8438c-0693-44f4-8f0c-1af99e4492c3", 00:10:38.517 "is_configured": true, 00:10:38.517 "data_offset": 2048, 00:10:38.517 "data_size": 63488 00:10:38.517 }, 00:10:38.517 { 00:10:38.517 "name": "BaseBdev3", 00:10:38.517 "uuid": "8f17ad16-c1f6-4a9d-b185-2dd0b8978a4e", 00:10:38.517 "is_configured": true, 00:10:38.517 "data_offset": 2048, 00:10:38.517 "data_size": 63488 00:10:38.517 }, 00:10:38.517 { 00:10:38.517 "name": "BaseBdev4", 00:10:38.517 "uuid": "21a0be45-dbe5-45e3-b74b-659bedc55ac8", 00:10:38.517 "is_configured": true, 00:10:38.517 "data_offset": 2048, 00:10:38.517 "data_size": 63488 00:10:38.517 } 00:10:38.517 ] 00:10:38.517 }' 00:10:38.517 08:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.517 08:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.087 08:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:39.087 08:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:39.087 08:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:39.087 08:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:39.087 08:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:39.087 
08:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:39.087 08:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:39.087 08:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:39.087 08:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.087 08:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.087 [2024-10-05 08:47:15.301771] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:39.087 08:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.087 08:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:39.087 "name": "Existed_Raid", 00:10:39.087 "aliases": [ 00:10:39.087 "d2720341-a011-4f6d-b185-983349514320" 00:10:39.087 ], 00:10:39.087 "product_name": "Raid Volume", 00:10:39.087 "block_size": 512, 00:10:39.087 "num_blocks": 253952, 00:10:39.087 "uuid": "d2720341-a011-4f6d-b185-983349514320", 00:10:39.087 "assigned_rate_limits": { 00:10:39.087 "rw_ios_per_sec": 0, 00:10:39.087 "rw_mbytes_per_sec": 0, 00:10:39.087 "r_mbytes_per_sec": 0, 00:10:39.087 "w_mbytes_per_sec": 0 00:10:39.087 }, 00:10:39.087 "claimed": false, 00:10:39.087 "zoned": false, 00:10:39.087 "supported_io_types": { 00:10:39.087 "read": true, 00:10:39.087 "write": true, 00:10:39.087 "unmap": true, 00:10:39.087 "flush": true, 00:10:39.087 "reset": true, 00:10:39.087 "nvme_admin": false, 00:10:39.087 "nvme_io": false, 00:10:39.087 "nvme_io_md": false, 00:10:39.087 "write_zeroes": true, 00:10:39.087 "zcopy": false, 00:10:39.087 "get_zone_info": false, 00:10:39.087 "zone_management": false, 00:10:39.087 "zone_append": false, 00:10:39.087 "compare": false, 00:10:39.087 "compare_and_write": false, 00:10:39.087 "abort": 
false, 00:10:39.087 "seek_hole": false, 00:10:39.087 "seek_data": false, 00:10:39.087 "copy": false, 00:10:39.087 "nvme_iov_md": false 00:10:39.087 }, 00:10:39.087 "memory_domains": [ 00:10:39.087 { 00:10:39.087 "dma_device_id": "system", 00:10:39.087 "dma_device_type": 1 00:10:39.087 }, 00:10:39.087 { 00:10:39.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.087 "dma_device_type": 2 00:10:39.087 }, 00:10:39.087 { 00:10:39.087 "dma_device_id": "system", 00:10:39.087 "dma_device_type": 1 00:10:39.087 }, 00:10:39.087 { 00:10:39.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.087 "dma_device_type": 2 00:10:39.087 }, 00:10:39.087 { 00:10:39.087 "dma_device_id": "system", 00:10:39.087 "dma_device_type": 1 00:10:39.087 }, 00:10:39.087 { 00:10:39.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.087 "dma_device_type": 2 00:10:39.087 }, 00:10:39.087 { 00:10:39.087 "dma_device_id": "system", 00:10:39.087 "dma_device_type": 1 00:10:39.087 }, 00:10:39.087 { 00:10:39.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.087 "dma_device_type": 2 00:10:39.087 } 00:10:39.087 ], 00:10:39.087 "driver_specific": { 00:10:39.087 "raid": { 00:10:39.087 "uuid": "d2720341-a011-4f6d-b185-983349514320", 00:10:39.087 "strip_size_kb": 64, 00:10:39.087 "state": "online", 00:10:39.087 "raid_level": "raid0", 00:10:39.087 "superblock": true, 00:10:39.087 "num_base_bdevs": 4, 00:10:39.087 "num_base_bdevs_discovered": 4, 00:10:39.088 "num_base_bdevs_operational": 4, 00:10:39.088 "base_bdevs_list": [ 00:10:39.088 { 00:10:39.088 "name": "NewBaseBdev", 00:10:39.088 "uuid": "8edfe56c-46e7-4f7d-a971-ef1a2bbb48bc", 00:10:39.088 "is_configured": true, 00:10:39.088 "data_offset": 2048, 00:10:39.088 "data_size": 63488 00:10:39.088 }, 00:10:39.088 { 00:10:39.088 "name": "BaseBdev2", 00:10:39.088 "uuid": "94e8438c-0693-44f4-8f0c-1af99e4492c3", 00:10:39.088 "is_configured": true, 00:10:39.088 "data_offset": 2048, 00:10:39.088 "data_size": 63488 00:10:39.088 }, 00:10:39.088 { 00:10:39.088 
"name": "BaseBdev3", 00:10:39.088 "uuid": "8f17ad16-c1f6-4a9d-b185-2dd0b8978a4e", 00:10:39.088 "is_configured": true, 00:10:39.088 "data_offset": 2048, 00:10:39.088 "data_size": 63488 00:10:39.088 }, 00:10:39.088 { 00:10:39.088 "name": "BaseBdev4", 00:10:39.088 "uuid": "21a0be45-dbe5-45e3-b74b-659bedc55ac8", 00:10:39.088 "is_configured": true, 00:10:39.088 "data_offset": 2048, 00:10:39.088 "data_size": 63488 00:10:39.088 } 00:10:39.088 ] 00:10:39.088 } 00:10:39.088 } 00:10:39.088 }' 00:10:39.088 08:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:39.088 08:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:39.088 BaseBdev2 00:10:39.088 BaseBdev3 00:10:39.088 BaseBdev4' 00:10:39.088 08:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.088 08:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:39.088 08:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.088 08:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.088 08:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:39.088 08:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.088 08:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.088 08:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.088 08:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.088 08:47:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.088 08:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.088 08:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:39.088 08:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.088 08:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.088 08:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.088 08:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.088 08:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.088 08:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.088 08:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.088 08:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.088 08:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:39.088 08:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.088 08:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.088 08:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.088 08:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.088 08:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:39.088 08:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:39.348 08:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:39.348 08:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.348 08:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:39.348 08:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.348 08:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.348 08:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:39.348 08:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:39.348 08:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:39.348 08:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.348 08:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.348 [2024-10-05 08:47:15.592891] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:39.348 [2024-10-05 08:47:15.592975] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:39.348 [2024-10-05 08:47:15.593066] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:39.348 [2024-10-05 08:47:15.593142] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:39.348 [2024-10-05 08:47:15.593153] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:39.348 08:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.348 08:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68829 00:10:39.348 08:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 68829 ']' 00:10:39.348 08:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 68829 00:10:39.348 08:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:39.348 08:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:39.348 08:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68829 00:10:39.348 08:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:39.348 08:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:39.348 08:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68829' 00:10:39.348 killing process with pid 68829 00:10:39.348 08:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 68829 00:10:39.348 [2024-10-05 08:47:15.640425] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:39.348 08:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 68829 00:10:39.607 [2024-10-05 08:47:16.062777] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:40.987 08:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:40.987 00:10:40.987 real 0m11.678s 00:10:40.987 user 0m18.097s 00:10:40.987 sys 0m2.235s 00:10:40.987 ************************************ 00:10:40.987 END TEST raid_state_function_test_sb 00:10:40.987 
************************************ 00:10:40.987 08:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:40.987 08:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.247 08:47:17 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:10:41.247 08:47:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:41.247 08:47:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:41.247 08:47:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:41.247 ************************************ 00:10:41.247 START TEST raid_superblock_test 00:10:41.247 ************************************ 00:10:41.247 08:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:10:41.248 08:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:41.248 08:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:41.248 08:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:41.248 08:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:41.248 08:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:41.248 08:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:41.248 08:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:41.248 08:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:41.248 08:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:41.248 08:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:41.248 08:47:17 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:41.248 08:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:41.248 08:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:41.248 08:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:41.248 08:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:41.248 08:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:41.248 08:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=69429 00:10:41.248 08:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 69429 00:10:41.248 08:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:41.248 08:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 69429 ']' 00:10:41.248 08:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.248 08:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:41.248 08:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.248 08:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:41.248 08:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.248 [2024-10-05 08:47:17.582933] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:10:41.248 [2024-10-05 08:47:17.583156] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69429 ] 00:10:41.507 [2024-10-05 08:47:17.745670] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.767 [2024-10-05 08:47:17.998464] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.767 [2024-10-05 08:47:18.226820] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:41.767 [2024-10-05 08:47:18.226950] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:42.028 08:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:42.028 08:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:42.028 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:42.028 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:42.028 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:42.028 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:42.028 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:42.028 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:42.028 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:42.028 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:42.028 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:42.028 
08:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.028 08:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.028 malloc1 00:10:42.028 08:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.028 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:42.028 08:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.028 08:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.028 [2024-10-05 08:47:18.462373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:42.028 [2024-10-05 08:47:18.462517] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:42.028 [2024-10-05 08:47:18.462562] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:42.028 [2024-10-05 08:47:18.462592] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:42.028 [2024-10-05 08:47:18.464919] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:42.028 [2024-10-05 08:47:18.465002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:42.028 pt1 00:10:42.028 08:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.028 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:42.028 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:42.028 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:42.028 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:42.028 08:47:18 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:42.028 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:42.028 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:42.028 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:42.028 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:42.028 08:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.028 08:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.289 malloc2 00:10:42.289 08:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.289 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:42.289 08:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.289 08:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.289 [2024-10-05 08:47:18.560915] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:42.289 [2024-10-05 08:47:18.560984] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:42.289 [2024-10-05 08:47:18.561011] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:42.289 [2024-10-05 08:47:18.561021] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:42.289 [2024-10-05 08:47:18.563274] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:42.289 [2024-10-05 08:47:18.563305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:42.289 
pt2 00:10:42.289 08:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.289 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:42.289 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:42.289 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:42.289 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:42.289 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:42.289 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:42.289 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:42.289 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:42.289 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:42.289 08:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.289 08:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.289 malloc3 00:10:42.289 08:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.289 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:42.289 08:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.289 08:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.289 [2024-10-05 08:47:18.621751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:42.289 [2024-10-05 08:47:18.621868] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:42.289 [2024-10-05 08:47:18.621911] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:42.289 [2024-10-05 08:47:18.621939] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:42.289 [2024-10-05 08:47:18.624185] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:42.289 [2024-10-05 08:47:18.624252] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:42.289 pt3 00:10:42.289 08:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.289 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:42.289 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:42.289 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:42.289 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:42.289 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:42.289 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:42.289 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:42.289 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:42.289 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:42.289 08:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.289 08:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.289 malloc4 00:10:42.289 08:47:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.289 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:42.289 08:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.289 08:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.289 [2024-10-05 08:47:18.681491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:42.289 [2024-10-05 08:47:18.681585] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:42.289 [2024-10-05 08:47:18.681619] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:42.289 [2024-10-05 08:47:18.681642] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:42.289 [2024-10-05 08:47:18.683838] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:42.289 [2024-10-05 08:47:18.683905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:42.289 pt4 00:10:42.289 08:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.289 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:42.289 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:42.289 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:42.289 08:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.289 08:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.289 [2024-10-05 08:47:18.693538] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:42.289 [2024-10-05 
08:47:18.695496] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:42.289 [2024-10-05 08:47:18.695591] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:42.289 [2024-10-05 08:47:18.695667] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:42.289 [2024-10-05 08:47:18.695871] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:42.289 [2024-10-05 08:47:18.695918] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:42.289 [2024-10-05 08:47:18.696182] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:42.290 [2024-10-05 08:47:18.696373] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:42.290 [2024-10-05 08:47:18.696416] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:42.290 [2024-10-05 08:47:18.696591] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:42.290 08:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.290 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:42.290 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:42.290 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:42.290 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:42.290 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.290 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.290 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:42.290 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.290 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.290 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.290 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.290 08:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.290 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:42.290 08:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.290 08:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.290 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.290 "name": "raid_bdev1", 00:10:42.290 "uuid": "7cc4c6a8-eb6b-4e50-b97e-7df667c743dd", 00:10:42.290 "strip_size_kb": 64, 00:10:42.290 "state": "online", 00:10:42.290 "raid_level": "raid0", 00:10:42.290 "superblock": true, 00:10:42.290 "num_base_bdevs": 4, 00:10:42.290 "num_base_bdevs_discovered": 4, 00:10:42.290 "num_base_bdevs_operational": 4, 00:10:42.290 "base_bdevs_list": [ 00:10:42.290 { 00:10:42.290 "name": "pt1", 00:10:42.290 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:42.290 "is_configured": true, 00:10:42.290 "data_offset": 2048, 00:10:42.290 "data_size": 63488 00:10:42.290 }, 00:10:42.290 { 00:10:42.290 "name": "pt2", 00:10:42.290 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:42.290 "is_configured": true, 00:10:42.290 "data_offset": 2048, 00:10:42.290 "data_size": 63488 00:10:42.290 }, 00:10:42.290 { 00:10:42.290 "name": "pt3", 00:10:42.290 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:42.290 "is_configured": true, 00:10:42.290 "data_offset": 2048, 00:10:42.290 
"data_size": 63488 00:10:42.290 }, 00:10:42.290 { 00:10:42.290 "name": "pt4", 00:10:42.290 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:42.290 "is_configured": true, 00:10:42.290 "data_offset": 2048, 00:10:42.290 "data_size": 63488 00:10:42.290 } 00:10:42.290 ] 00:10:42.290 }' 00:10:42.290 08:47:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.290 08:47:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.861 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:42.861 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:42.861 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:42.861 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:42.861 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:42.861 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:42.861 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:42.861 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:42.861 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.861 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.861 [2024-10-05 08:47:19.129151] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:42.861 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.861 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:42.861 "name": "raid_bdev1", 00:10:42.861 "aliases": [ 00:10:42.861 "7cc4c6a8-eb6b-4e50-b97e-7df667c743dd" 
00:10:42.861 ], 00:10:42.861 "product_name": "Raid Volume", 00:10:42.861 "block_size": 512, 00:10:42.861 "num_blocks": 253952, 00:10:42.861 "uuid": "7cc4c6a8-eb6b-4e50-b97e-7df667c743dd", 00:10:42.861 "assigned_rate_limits": { 00:10:42.861 "rw_ios_per_sec": 0, 00:10:42.861 "rw_mbytes_per_sec": 0, 00:10:42.861 "r_mbytes_per_sec": 0, 00:10:42.861 "w_mbytes_per_sec": 0 00:10:42.861 }, 00:10:42.861 "claimed": false, 00:10:42.861 "zoned": false, 00:10:42.861 "supported_io_types": { 00:10:42.861 "read": true, 00:10:42.861 "write": true, 00:10:42.861 "unmap": true, 00:10:42.861 "flush": true, 00:10:42.861 "reset": true, 00:10:42.861 "nvme_admin": false, 00:10:42.861 "nvme_io": false, 00:10:42.861 "nvme_io_md": false, 00:10:42.861 "write_zeroes": true, 00:10:42.861 "zcopy": false, 00:10:42.861 "get_zone_info": false, 00:10:42.861 "zone_management": false, 00:10:42.861 "zone_append": false, 00:10:42.861 "compare": false, 00:10:42.861 "compare_and_write": false, 00:10:42.861 "abort": false, 00:10:42.861 "seek_hole": false, 00:10:42.861 "seek_data": false, 00:10:42.861 "copy": false, 00:10:42.861 "nvme_iov_md": false 00:10:42.861 }, 00:10:42.861 "memory_domains": [ 00:10:42.861 { 00:10:42.861 "dma_device_id": "system", 00:10:42.861 "dma_device_type": 1 00:10:42.861 }, 00:10:42.861 { 00:10:42.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.861 "dma_device_type": 2 00:10:42.861 }, 00:10:42.861 { 00:10:42.861 "dma_device_id": "system", 00:10:42.861 "dma_device_type": 1 00:10:42.861 }, 00:10:42.861 { 00:10:42.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.861 "dma_device_type": 2 00:10:42.861 }, 00:10:42.861 { 00:10:42.861 "dma_device_id": "system", 00:10:42.861 "dma_device_type": 1 00:10:42.861 }, 00:10:42.861 { 00:10:42.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.861 "dma_device_type": 2 00:10:42.861 }, 00:10:42.861 { 00:10:42.861 "dma_device_id": "system", 00:10:42.861 "dma_device_type": 1 00:10:42.861 }, 00:10:42.861 { 00:10:42.861 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:42.861 "dma_device_type": 2 00:10:42.861 } 00:10:42.861 ], 00:10:42.861 "driver_specific": { 00:10:42.861 "raid": { 00:10:42.861 "uuid": "7cc4c6a8-eb6b-4e50-b97e-7df667c743dd", 00:10:42.861 "strip_size_kb": 64, 00:10:42.861 "state": "online", 00:10:42.861 "raid_level": "raid0", 00:10:42.861 "superblock": true, 00:10:42.861 "num_base_bdevs": 4, 00:10:42.861 "num_base_bdevs_discovered": 4, 00:10:42.861 "num_base_bdevs_operational": 4, 00:10:42.861 "base_bdevs_list": [ 00:10:42.861 { 00:10:42.861 "name": "pt1", 00:10:42.861 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:42.861 "is_configured": true, 00:10:42.861 "data_offset": 2048, 00:10:42.861 "data_size": 63488 00:10:42.861 }, 00:10:42.861 { 00:10:42.861 "name": "pt2", 00:10:42.861 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:42.861 "is_configured": true, 00:10:42.861 "data_offset": 2048, 00:10:42.861 "data_size": 63488 00:10:42.861 }, 00:10:42.861 { 00:10:42.861 "name": "pt3", 00:10:42.861 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:42.861 "is_configured": true, 00:10:42.861 "data_offset": 2048, 00:10:42.861 "data_size": 63488 00:10:42.861 }, 00:10:42.861 { 00:10:42.861 "name": "pt4", 00:10:42.861 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:42.861 "is_configured": true, 00:10:42.861 "data_offset": 2048, 00:10:42.861 "data_size": 63488 00:10:42.861 } 00:10:42.861 ] 00:10:42.861 } 00:10:42.861 } 00:10:42.861 }' 00:10:42.861 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:42.861 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:42.861 pt2 00:10:42.861 pt3 00:10:42.861 pt4' 00:10:42.861 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.861 08:47:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:42.861 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.861 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.861 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:42.861 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.861 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.861 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.861 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.861 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.861 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.861 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.862 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:42.862 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.862 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.122 08:47:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.122 [2024-10-05 08:47:19.448455] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7cc4c6a8-eb6b-4e50-b97e-7df667c743dd 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7cc4c6a8-eb6b-4e50-b97e-7df667c743dd ']' 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.122 [2024-10-05 08:47:19.492107] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:43.122 [2024-10-05 08:47:19.492134] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:43.122 [2024-10-05 08:47:19.492215] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:43.122 [2024-10-05 08:47:19.492303] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:43.122 [2024-10-05 08:47:19.492319] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:43.122 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.123 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.123 08:47:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.123 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:43.123 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:43.123 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.123 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.383 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.383 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:43.383 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:43.383 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.383 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.383 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.383 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:43.383 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:43.383 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:43.383 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:43.383 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:43.383 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:43.383 08:47:19 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:43.383 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:43.383 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:43.383 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.383 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.383 [2024-10-05 08:47:19.655860] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:43.383 [2024-10-05 08:47:19.657946] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:43.383 [2024-10-05 08:47:19.658043] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:43.383 [2024-10-05 08:47:19.658096] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:43.383 [2024-10-05 08:47:19.658172] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:43.383 [2024-10-05 08:47:19.658256] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:43.383 [2024-10-05 08:47:19.658315] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:43.383 [2024-10-05 08:47:19.658374] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:43.383 [2024-10-05 08:47:19.658422] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:43.383 [2024-10-05 08:47:19.658451] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:10:43.383 request: 00:10:43.383 { 00:10:43.383 "name": "raid_bdev1", 00:10:43.383 "raid_level": "raid0", 00:10:43.383 "base_bdevs": [ 00:10:43.383 "malloc1", 00:10:43.383 "malloc2", 00:10:43.383 "malloc3", 00:10:43.383 "malloc4" 00:10:43.383 ], 00:10:43.383 "strip_size_kb": 64, 00:10:43.383 "superblock": false, 00:10:43.383 "method": "bdev_raid_create", 00:10:43.383 "req_id": 1 00:10:43.383 } 00:10:43.383 Got JSON-RPC error response 00:10:43.383 response: 00:10:43.383 { 00:10:43.383 "code": -17, 00:10:43.384 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:43.384 } 00:10:43.384 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:43.384 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:43.384 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:43.384 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:43.384 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:43.384 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.384 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.384 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.384 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:43.384 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.384 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:43.384 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:43.384 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:10:43.384 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.384 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.384 [2024-10-05 08:47:19.723717] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:43.384 [2024-10-05 08:47:19.723763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.384 [2024-10-05 08:47:19.723781] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:43.384 [2024-10-05 08:47:19.723793] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.384 [2024-10-05 08:47:19.726204] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.384 [2024-10-05 08:47:19.726239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:43.384 [2024-10-05 08:47:19.726307] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:43.384 [2024-10-05 08:47:19.726362] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:43.384 pt1 00:10:43.384 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.384 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:43.384 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:43.384 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.384 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:43.384 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.384 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:43.384 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.384 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.384 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.384 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.384 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:43.384 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.384 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.384 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.384 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.384 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.384 "name": "raid_bdev1", 00:10:43.384 "uuid": "7cc4c6a8-eb6b-4e50-b97e-7df667c743dd", 00:10:43.384 "strip_size_kb": 64, 00:10:43.384 "state": "configuring", 00:10:43.384 "raid_level": "raid0", 00:10:43.384 "superblock": true, 00:10:43.384 "num_base_bdevs": 4, 00:10:43.384 "num_base_bdevs_discovered": 1, 00:10:43.384 "num_base_bdevs_operational": 4, 00:10:43.384 "base_bdevs_list": [ 00:10:43.384 { 00:10:43.384 "name": "pt1", 00:10:43.384 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:43.384 "is_configured": true, 00:10:43.384 "data_offset": 2048, 00:10:43.384 "data_size": 63488 00:10:43.384 }, 00:10:43.384 { 00:10:43.384 "name": null, 00:10:43.384 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:43.384 "is_configured": false, 00:10:43.384 "data_offset": 2048, 00:10:43.384 "data_size": 63488 00:10:43.384 }, 00:10:43.384 { 00:10:43.384 "name": null, 00:10:43.384 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:43.384 "is_configured": false, 00:10:43.384 "data_offset": 2048, 00:10:43.384 "data_size": 63488 00:10:43.384 }, 00:10:43.384 { 00:10:43.384 "name": null, 00:10:43.384 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:43.384 "is_configured": false, 00:10:43.384 "data_offset": 2048, 00:10:43.384 "data_size": 63488 00:10:43.384 } 00:10:43.384 ] 00:10:43.384 }' 00:10:43.384 08:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.384 08:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.959 08:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:43.960 08:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:43.960 08:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.960 08:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.960 [2024-10-05 08:47:20.147054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:43.960 [2024-10-05 08:47:20.147186] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.960 [2024-10-05 08:47:20.147235] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:43.960 [2024-10-05 08:47:20.147270] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.960 [2024-10-05 08:47:20.147808] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.960 [2024-10-05 08:47:20.147868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:43.960 [2024-10-05 08:47:20.148014] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:43.960 [2024-10-05 08:47:20.148070] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:43.960 pt2 00:10:43.960 08:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.960 08:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:43.960 08:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.960 08:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.960 [2024-10-05 08:47:20.159025] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:43.960 08:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.960 08:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:43.960 08:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:43.960 08:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.960 08:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:43.960 08:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.960 08:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.960 08:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.960 08:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.960 08:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.960 08:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.960 08:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.960 08:47:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:43.960 08:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.960 08:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.960 08:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.960 08:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.960 "name": "raid_bdev1", 00:10:43.960 "uuid": "7cc4c6a8-eb6b-4e50-b97e-7df667c743dd", 00:10:43.960 "strip_size_kb": 64, 00:10:43.960 "state": "configuring", 00:10:43.960 "raid_level": "raid0", 00:10:43.960 "superblock": true, 00:10:43.960 "num_base_bdevs": 4, 00:10:43.960 "num_base_bdevs_discovered": 1, 00:10:43.960 "num_base_bdevs_operational": 4, 00:10:43.960 "base_bdevs_list": [ 00:10:43.960 { 00:10:43.960 "name": "pt1", 00:10:43.960 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:43.960 "is_configured": true, 00:10:43.960 "data_offset": 2048, 00:10:43.960 "data_size": 63488 00:10:43.960 }, 00:10:43.960 { 00:10:43.960 "name": null, 00:10:43.960 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:43.960 "is_configured": false, 00:10:43.960 "data_offset": 0, 00:10:43.960 "data_size": 63488 00:10:43.960 }, 00:10:43.960 { 00:10:43.960 "name": null, 00:10:43.960 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:43.960 "is_configured": false, 00:10:43.960 "data_offset": 2048, 00:10:43.960 "data_size": 63488 00:10:43.960 }, 00:10:43.960 { 00:10:43.960 "name": null, 00:10:43.960 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:43.960 "is_configured": false, 00:10:43.960 "data_offset": 2048, 00:10:43.960 "data_size": 63488 00:10:43.960 } 00:10:43.960 ] 00:10:43.960 }' 00:10:43.960 08:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.960 08:47:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:44.221 08:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:44.221 08:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:44.221 08:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:44.221 08:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.221 08:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.221 [2024-10-05 08:47:20.614192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:44.221 [2024-10-05 08:47:20.614247] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.221 [2024-10-05 08:47:20.614268] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:44.221 [2024-10-05 08:47:20.614277] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.221 [2024-10-05 08:47:20.614736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.221 [2024-10-05 08:47:20.614758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:44.221 [2024-10-05 08:47:20.614842] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:44.221 [2024-10-05 08:47:20.614879] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:44.221 pt2 00:10:44.221 08:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.221 08:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:44.221 08:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:44.221 08:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:44.221 08:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.221 08:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.221 [2024-10-05 08:47:20.622168] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:44.221 [2024-10-05 08:47:20.622213] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.221 [2024-10-05 08:47:20.622239] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:44.221 [2024-10-05 08:47:20.622249] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.221 [2024-10-05 08:47:20.622628] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.221 [2024-10-05 08:47:20.622647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:44.221 [2024-10-05 08:47:20.622706] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:44.221 [2024-10-05 08:47:20.622728] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:44.221 pt3 00:10:44.221 08:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.221 08:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:44.221 08:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:44.221 08:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:44.221 08:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.221 08:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.221 [2024-10-05 08:47:20.630127] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:44.221 [2024-10-05 08:47:20.630172] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.221 [2024-10-05 08:47:20.630190] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:44.221 [2024-10-05 08:47:20.630197] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.221 [2024-10-05 08:47:20.630555] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.221 [2024-10-05 08:47:20.630570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:44.221 [2024-10-05 08:47:20.630630] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:44.221 [2024-10-05 08:47:20.630652] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:44.221 [2024-10-05 08:47:20.630796] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:44.221 [2024-10-05 08:47:20.630804] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:44.221 [2024-10-05 08:47:20.631056] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:44.221 [2024-10-05 08:47:20.631213] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:44.221 [2024-10-05 08:47:20.631226] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:44.221 [2024-10-05 08:47:20.631349] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:44.221 pt4 00:10:44.221 08:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.221 08:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:44.221 08:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:44.221 08:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:44.221 08:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:44.221 08:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:44.221 08:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.221 08:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.221 08:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.221 08:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.221 08:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.221 08:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.221 08:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.221 08:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.221 08:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:44.221 08:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.221 08:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.221 08:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.221 08:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.221 "name": "raid_bdev1", 00:10:44.221 "uuid": "7cc4c6a8-eb6b-4e50-b97e-7df667c743dd", 00:10:44.221 "strip_size_kb": 64, 00:10:44.221 "state": "online", 00:10:44.221 "raid_level": "raid0", 00:10:44.221 
"superblock": true, 00:10:44.221 "num_base_bdevs": 4, 00:10:44.221 "num_base_bdevs_discovered": 4, 00:10:44.221 "num_base_bdevs_operational": 4, 00:10:44.221 "base_bdevs_list": [ 00:10:44.221 { 00:10:44.221 "name": "pt1", 00:10:44.221 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:44.221 "is_configured": true, 00:10:44.221 "data_offset": 2048, 00:10:44.221 "data_size": 63488 00:10:44.221 }, 00:10:44.221 { 00:10:44.221 "name": "pt2", 00:10:44.221 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:44.221 "is_configured": true, 00:10:44.221 "data_offset": 2048, 00:10:44.221 "data_size": 63488 00:10:44.221 }, 00:10:44.221 { 00:10:44.221 "name": "pt3", 00:10:44.221 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:44.221 "is_configured": true, 00:10:44.221 "data_offset": 2048, 00:10:44.221 "data_size": 63488 00:10:44.221 }, 00:10:44.221 { 00:10:44.221 "name": "pt4", 00:10:44.221 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:44.221 "is_configured": true, 00:10:44.221 "data_offset": 2048, 00:10:44.221 "data_size": 63488 00:10:44.221 } 00:10:44.221 ] 00:10:44.221 }' 00:10:44.221 08:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.221 08:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.792 08:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:44.792 08:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:44.792 08:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:44.792 08:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:44.792 08:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:44.792 08:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:44.792 08:47:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:44.792 08:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:44.792 08:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.792 08:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.792 [2024-10-05 08:47:21.041714] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:44.792 08:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.792 08:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:44.792 "name": "raid_bdev1", 00:10:44.792 "aliases": [ 00:10:44.792 "7cc4c6a8-eb6b-4e50-b97e-7df667c743dd" 00:10:44.792 ], 00:10:44.792 "product_name": "Raid Volume", 00:10:44.792 "block_size": 512, 00:10:44.792 "num_blocks": 253952, 00:10:44.792 "uuid": "7cc4c6a8-eb6b-4e50-b97e-7df667c743dd", 00:10:44.792 "assigned_rate_limits": { 00:10:44.792 "rw_ios_per_sec": 0, 00:10:44.792 "rw_mbytes_per_sec": 0, 00:10:44.792 "r_mbytes_per_sec": 0, 00:10:44.792 "w_mbytes_per_sec": 0 00:10:44.792 }, 00:10:44.792 "claimed": false, 00:10:44.792 "zoned": false, 00:10:44.792 "supported_io_types": { 00:10:44.792 "read": true, 00:10:44.792 "write": true, 00:10:44.792 "unmap": true, 00:10:44.792 "flush": true, 00:10:44.792 "reset": true, 00:10:44.792 "nvme_admin": false, 00:10:44.792 "nvme_io": false, 00:10:44.792 "nvme_io_md": false, 00:10:44.792 "write_zeroes": true, 00:10:44.792 "zcopy": false, 00:10:44.792 "get_zone_info": false, 00:10:44.792 "zone_management": false, 00:10:44.792 "zone_append": false, 00:10:44.792 "compare": false, 00:10:44.792 "compare_and_write": false, 00:10:44.792 "abort": false, 00:10:44.792 "seek_hole": false, 00:10:44.792 "seek_data": false, 00:10:44.792 "copy": false, 00:10:44.792 "nvme_iov_md": false 00:10:44.792 }, 00:10:44.792 
"memory_domains": [ 00:10:44.792 { 00:10:44.792 "dma_device_id": "system", 00:10:44.792 "dma_device_type": 1 00:10:44.792 }, 00:10:44.792 { 00:10:44.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.792 "dma_device_type": 2 00:10:44.792 }, 00:10:44.792 { 00:10:44.792 "dma_device_id": "system", 00:10:44.792 "dma_device_type": 1 00:10:44.792 }, 00:10:44.792 { 00:10:44.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.792 "dma_device_type": 2 00:10:44.792 }, 00:10:44.792 { 00:10:44.792 "dma_device_id": "system", 00:10:44.792 "dma_device_type": 1 00:10:44.792 }, 00:10:44.792 { 00:10:44.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.792 "dma_device_type": 2 00:10:44.792 }, 00:10:44.792 { 00:10:44.792 "dma_device_id": "system", 00:10:44.792 "dma_device_type": 1 00:10:44.792 }, 00:10:44.792 { 00:10:44.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.792 "dma_device_type": 2 00:10:44.792 } 00:10:44.792 ], 00:10:44.792 "driver_specific": { 00:10:44.792 "raid": { 00:10:44.792 "uuid": "7cc4c6a8-eb6b-4e50-b97e-7df667c743dd", 00:10:44.792 "strip_size_kb": 64, 00:10:44.792 "state": "online", 00:10:44.792 "raid_level": "raid0", 00:10:44.792 "superblock": true, 00:10:44.792 "num_base_bdevs": 4, 00:10:44.792 "num_base_bdevs_discovered": 4, 00:10:44.792 "num_base_bdevs_operational": 4, 00:10:44.792 "base_bdevs_list": [ 00:10:44.792 { 00:10:44.792 "name": "pt1", 00:10:44.792 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:44.792 "is_configured": true, 00:10:44.792 "data_offset": 2048, 00:10:44.792 "data_size": 63488 00:10:44.792 }, 00:10:44.793 { 00:10:44.793 "name": "pt2", 00:10:44.793 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:44.793 "is_configured": true, 00:10:44.793 "data_offset": 2048, 00:10:44.793 "data_size": 63488 00:10:44.793 }, 00:10:44.793 { 00:10:44.793 "name": "pt3", 00:10:44.793 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:44.793 "is_configured": true, 00:10:44.793 "data_offset": 2048, 00:10:44.793 "data_size": 63488 
00:10:44.793 }, 00:10:44.793 { 00:10:44.793 "name": "pt4", 00:10:44.793 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:44.793 "is_configured": true, 00:10:44.793 "data_offset": 2048, 00:10:44.793 "data_size": 63488 00:10:44.793 } 00:10:44.793 ] 00:10:44.793 } 00:10:44.793 } 00:10:44.793 }' 00:10:44.793 08:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:44.793 08:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:44.793 pt2 00:10:44.793 pt3 00:10:44.793 pt4' 00:10:44.793 08:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.793 08:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:44.793 08:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.793 08:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.793 08:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:44.793 08:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.793 08:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.793 08:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.793 08:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.793 08:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.793 08:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.793 08:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:10:44.793 08:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.793 08:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.793 08:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.793 08:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.054 08:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.054 08:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.054 08:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.054 08:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.054 08:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:45.054 08:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.054 08:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.054 08:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.054 08:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.054 08:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.054 08:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.054 08:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.054 08:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 
00:10:45.054 08:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.054 08:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.054 08:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.054 08:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.054 08:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.054 08:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:45.054 08:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:45.054 08:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.054 08:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.054 [2024-10-05 08:47:21.361156] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:45.054 08:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.054 08:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7cc4c6a8-eb6b-4e50-b97e-7df667c743dd '!=' 7cc4c6a8-eb6b-4e50-b97e-7df667c743dd ']' 00:10:45.054 08:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:45.054 08:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:45.054 08:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:45.054 08:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 69429 00:10:45.054 08:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 69429 ']' 00:10:45.054 08:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 69429 00:10:45.054 08:47:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:10:45.054 08:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:45.054 08:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69429 00:10:45.054 08:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:45.054 08:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:45.054 08:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69429' 00:10:45.054 killing process with pid 69429 00:10:45.054 08:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 69429 00:10:45.054 [2024-10-05 08:47:21.432307] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:45.054 [2024-10-05 08:47:21.432437] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:45.054 08:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 69429 00:10:45.054 [2024-10-05 08:47:21.432541] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:45.054 [2024-10-05 08:47:21.432553] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:45.625 [2024-10-05 08:47:21.845598] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:47.014 08:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:47.014 00:10:47.014 real 0m5.687s 00:10:47.014 user 0m7.811s 00:10:47.014 sys 0m1.115s 00:10:47.014 08:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:47.014 08:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.014 ************************************ 00:10:47.014 END TEST raid_superblock_test 
00:10:47.014 ************************************ 00:10:47.014 08:47:23 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:10:47.014 08:47:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:47.014 08:47:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:47.014 08:47:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:47.014 ************************************ 00:10:47.014 START TEST raid_read_error_test 00:10:47.014 ************************************ 00:10:47.014 08:47:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:10:47.014 08:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:47.014 08:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:47.014 08:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:47.014 08:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:47.014 08:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:47.014 08:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:47.014 08:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:47.014 08:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:47.014 08:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:47.014 08:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:47.014 08:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:47.014 08:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:47.014 08:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:10:47.014 08:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:47.014 08:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:47.014 08:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:47.014 08:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:47.014 08:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:47.015 08:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:47.015 08:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:47.015 08:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:47.015 08:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:47.015 08:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:47.015 08:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:47.015 08:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:47.015 08:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:47.015 08:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:47.015 08:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:47.015 08:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Rxly5BDHSt 00:10:47.015 08:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69658 00:10:47.015 08:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f 
-L bdev_raid 00:10:47.015 08:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69658 00:10:47.015 08:47:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 69658 ']' 00:10:47.015 08:47:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.015 08:47:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:47.015 08:47:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.015 08:47:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:47.015 08:47:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.015 [2024-10-05 08:47:23.370303] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:10:47.015 [2024-10-05 08:47:23.370505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69658 ] 00:10:47.275 [2024-10-05 08:47:23.540409] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.535 [2024-10-05 08:47:23.789595] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.795 [2024-10-05 08:47:24.025345] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:47.795 [2024-10-05 08:47:24.025486] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:47.795 08:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:47.795 08:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:47.795 08:47:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:47.795 08:47:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:47.795 08:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.795 08:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.795 BaseBdev1_malloc 00:10:47.795 08:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.795 08:47:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:47.795 08:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.795 08:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.795 true 00:10:47.795 08:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:47.795 08:47:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:47.795 08:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.795 08:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.795 [2024-10-05 08:47:24.262562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:47.795 [2024-10-05 08:47:24.262629] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.795 [2024-10-05 08:47:24.262646] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:47.795 [2024-10-05 08:47:24.262657] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.795 [2024-10-05 08:47:24.265022] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.795 [2024-10-05 08:47:24.265130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:48.055 BaseBdev1 00:10:48.055 08:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.055 08:47:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:48.055 08:47:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:48.055 08:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.055 08:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.055 BaseBdev2_malloc 00:10:48.055 08:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.055 08:47:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:48.055 08:47:24 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.055 08:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.055 true 00:10:48.055 08:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.055 08:47:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:48.055 08:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.055 08:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.055 [2024-10-05 08:47:24.347690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:48.055 [2024-10-05 08:47:24.347754] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.055 [2024-10-05 08:47:24.347770] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:48.055 [2024-10-05 08:47:24.347781] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.056 [2024-10-05 08:47:24.350152] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.056 [2024-10-05 08:47:24.350189] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:48.056 BaseBdev2 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.056 BaseBdev3_malloc 00:10:48.056 08:47:24 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.056 true 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.056 [2024-10-05 08:47:24.421062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:48.056 [2024-10-05 08:47:24.421117] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.056 [2024-10-05 08:47:24.421134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:48.056 [2024-10-05 08:47:24.421146] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.056 [2024-10-05 08:47:24.423552] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.056 [2024-10-05 08:47:24.423592] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:48.056 BaseBdev3 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.056 BaseBdev4_malloc 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.056 true 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.056 [2024-10-05 08:47:24.495334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:48.056 [2024-10-05 08:47:24.495391] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.056 [2024-10-05 08:47:24.495410] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:48.056 [2024-10-05 08:47:24.495423] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.056 [2024-10-05 08:47:24.497791] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.056 [2024-10-05 08:47:24.497872] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:48.056 BaseBdev4 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.056 [2024-10-05 08:47:24.507404] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:48.056 [2024-10-05 08:47:24.509550] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:48.056 [2024-10-05 08:47:24.509670] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:48.056 [2024-10-05 08:47:24.509767] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:48.056 [2024-10-05 08:47:24.510050] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:48.056 [2024-10-05 08:47:24.510100] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:48.056 [2024-10-05 08:47:24.510374] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:48.056 [2024-10-05 08:47:24.510562] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:48.056 [2024-10-05 08:47:24.510599] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:48.056 [2024-10-05 08:47:24.510800] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:48.056 08:47:24 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.056 08:47:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:48.316 08:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.316 08:47:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.316 "name": "raid_bdev1", 00:10:48.316 "uuid": "627f8483-31cd-4ce2-8d06-e9a378092d65", 00:10:48.316 "strip_size_kb": 64, 00:10:48.316 "state": "online", 00:10:48.316 "raid_level": "raid0", 00:10:48.316 "superblock": true, 00:10:48.316 "num_base_bdevs": 4, 00:10:48.316 "num_base_bdevs_discovered": 4, 00:10:48.316 "num_base_bdevs_operational": 4, 00:10:48.316 "base_bdevs_list": [ 00:10:48.316 
{ 00:10:48.316 "name": "BaseBdev1", 00:10:48.316 "uuid": "9abb19c8-6455-5ea5-aa4e-29116c266020", 00:10:48.316 "is_configured": true, 00:10:48.316 "data_offset": 2048, 00:10:48.316 "data_size": 63488 00:10:48.316 }, 00:10:48.316 { 00:10:48.316 "name": "BaseBdev2", 00:10:48.316 "uuid": "e9e17982-b0ec-5103-8a97-f59d4fde4e94", 00:10:48.316 "is_configured": true, 00:10:48.316 "data_offset": 2048, 00:10:48.316 "data_size": 63488 00:10:48.316 }, 00:10:48.316 { 00:10:48.316 "name": "BaseBdev3", 00:10:48.316 "uuid": "eea37d99-a2b3-545e-89c6-831f5c3d5cd0", 00:10:48.316 "is_configured": true, 00:10:48.316 "data_offset": 2048, 00:10:48.316 "data_size": 63488 00:10:48.316 }, 00:10:48.316 { 00:10:48.316 "name": "BaseBdev4", 00:10:48.316 "uuid": "5b110573-ce31-50db-8951-dc54f48f8ad9", 00:10:48.316 "is_configured": true, 00:10:48.316 "data_offset": 2048, 00:10:48.316 "data_size": 63488 00:10:48.316 } 00:10:48.316 ] 00:10:48.316 }' 00:10:48.316 08:47:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.316 08:47:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.575 08:47:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:48.575 08:47:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:48.834 [2024-10-05 08:47:25.051993] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:49.773 08:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:49.773 08:47:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.773 08:47:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.773 08:47:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.773 08:47:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:49.773 08:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:49.773 08:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:49.773 08:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:49.773 08:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:49.773 08:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:49.773 08:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:49.773 08:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.773 08:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.773 08:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.773 08:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.773 08:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.773 08:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.773 08:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.773 08:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:49.773 08:47:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.773 08:47:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.773 08:47:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.773 08:47:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.773 "name": "raid_bdev1", 00:10:49.773 "uuid": "627f8483-31cd-4ce2-8d06-e9a378092d65", 00:10:49.773 "strip_size_kb": 64, 00:10:49.773 "state": "online", 00:10:49.773 "raid_level": "raid0", 00:10:49.773 "superblock": true, 00:10:49.773 "num_base_bdevs": 4, 00:10:49.773 "num_base_bdevs_discovered": 4, 00:10:49.773 "num_base_bdevs_operational": 4, 00:10:49.773 "base_bdevs_list": [ 00:10:49.773 { 00:10:49.773 "name": "BaseBdev1", 00:10:49.773 "uuid": "9abb19c8-6455-5ea5-aa4e-29116c266020", 00:10:49.773 "is_configured": true, 00:10:49.773 "data_offset": 2048, 00:10:49.773 "data_size": 63488 00:10:49.773 }, 00:10:49.773 { 00:10:49.773 "name": "BaseBdev2", 00:10:49.773 "uuid": "e9e17982-b0ec-5103-8a97-f59d4fde4e94", 00:10:49.773 "is_configured": true, 00:10:49.773 "data_offset": 2048, 00:10:49.773 "data_size": 63488 00:10:49.773 }, 00:10:49.773 { 00:10:49.773 "name": "BaseBdev3", 00:10:49.773 "uuid": "eea37d99-a2b3-545e-89c6-831f5c3d5cd0", 00:10:49.773 "is_configured": true, 00:10:49.773 "data_offset": 2048, 00:10:49.773 "data_size": 63488 00:10:49.773 }, 00:10:49.773 { 00:10:49.773 "name": "BaseBdev4", 00:10:49.773 "uuid": "5b110573-ce31-50db-8951-dc54f48f8ad9", 00:10:49.773 "is_configured": true, 00:10:49.773 "data_offset": 2048, 00:10:49.773 "data_size": 63488 00:10:49.773 } 00:10:49.773 ] 00:10:49.773 }' 00:10:49.773 08:47:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.773 08:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.034 08:47:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:50.034 08:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.034 08:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.034 [2024-10-05 08:47:26.408627] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:50.034 [2024-10-05 08:47:26.408761] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:50.034 [2024-10-05 08:47:26.411376] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:50.034 [2024-10-05 08:47:26.411485] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.034 [2024-10-05 08:47:26.411552] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:50.034 [2024-10-05 08:47:26.411605] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:50.034 { 00:10:50.034 "results": [ 00:10:50.034 { 00:10:50.034 "job": "raid_bdev1", 00:10:50.034 "core_mask": "0x1", 00:10:50.034 "workload": "randrw", 00:10:50.034 "percentage": 50, 00:10:50.034 "status": "finished", 00:10:50.034 "queue_depth": 1, 00:10:50.034 "io_size": 131072, 00:10:50.034 "runtime": 1.357239, 00:10:50.034 "iops": 13931.223609106428, 00:10:50.034 "mibps": 1741.4029511383035, 00:10:50.034 "io_failed": 1, 00:10:50.034 "io_timeout": 0, 00:10:50.034 "avg_latency_us": 101.22761089021863, 00:10:50.034 "min_latency_us": 25.3764192139738, 00:10:50.034 "max_latency_us": 1380.8349344978167 00:10:50.034 } 00:10:50.034 ], 00:10:50.034 "core_count": 1 00:10:50.034 } 00:10:50.034 08:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.034 08:47:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69658 00:10:50.034 08:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 69658 ']' 00:10:50.034 08:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 69658 00:10:50.034 08:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:50.034 08:47:26 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:50.034 08:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69658 00:10:50.034 08:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:50.034 08:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:50.034 08:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69658' 00:10:50.034 killing process with pid 69658 00:10:50.034 08:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 69658 00:10:50.034 [2024-10-05 08:47:26.460523] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:50.034 08:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 69658 00:10:50.603 [2024-10-05 08:47:26.812758] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:51.983 08:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Rxly5BDHSt 00:10:51.983 08:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:51.983 08:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:51.983 08:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:10:51.983 08:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:51.983 08:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:51.983 08:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:51.983 08:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:10:51.983 00:10:51.983 real 0m4.988s 00:10:51.983 user 0m5.658s 00:10:51.983 sys 0m0.741s 00:10:51.983 08:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:10:51.983 ************************************ 00:10:51.983 END TEST raid_read_error_test 00:10:51.983 ************************************ 00:10:51.983 08:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.983 08:47:28 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:10:51.983 08:47:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:51.983 08:47:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:51.983 08:47:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:51.983 ************************************ 00:10:51.983 START TEST raid_write_error_test 00:10:51.983 ************************************ 00:10:51.983 08:47:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:10:51.984 08:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:51.984 08:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:51.984 08:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:51.984 08:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:51.984 08:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:51.984 08:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:51.984 08:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:51.984 08:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:51.984 08:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:51.984 08:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:51.984 08:47:28 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:51.984 08:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:51.984 08:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:51.984 08:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:51.984 08:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:51.984 08:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:51.984 08:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:51.984 08:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:51.984 08:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:51.984 08:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:51.984 08:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:51.984 08:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:51.984 08:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:51.984 08:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:51.984 08:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:51.984 08:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:51.984 08:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:51.984 08:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:51.984 08:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wxOMH0AHvh 00:10:51.984 08:47:28 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69779 00:10:51.984 08:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:51.984 08:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69779 00:10:51.984 08:47:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 69779 ']' 00:10:51.984 08:47:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.984 08:47:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:51.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.984 08:47:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.984 08:47:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:51.984 08:47:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.984 [2024-10-05 08:47:28.422065] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:10:51.984 [2024-10-05 08:47:28.422178] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69779 ] 00:10:52.243 [2024-10-05 08:47:28.584812] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.502 [2024-10-05 08:47:28.833847] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.762 [2024-10-05 08:47:29.063038] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:52.762 [2024-10-05 08:47:29.063070] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:53.022 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:53.022 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:53.022 08:47:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:53.022 08:47:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:53.022 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.022 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.022 BaseBdev1_malloc 00:10:53.022 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.022 08:47:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:53.022 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.022 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.022 true 00:10:53.022 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:53.022 08:47:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:53.022 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.022 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.022 [2024-10-05 08:47:29.316454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:53.022 [2024-10-05 08:47:29.316600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.022 [2024-10-05 08:47:29.316634] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:53.022 [2024-10-05 08:47:29.316665] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.022 [2024-10-05 08:47:29.319068] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.022 [2024-10-05 08:47:29.319141] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:53.022 BaseBdev1 00:10:53.022 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.022 08:47:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:53.022 08:47:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:53.022 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.022 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.022 BaseBdev2_malloc 00:10:53.022 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.022 08:47:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:53.022 08:47:29 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.022 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.022 true 00:10:53.022 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.022 08:47:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:53.022 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.022 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.022 [2024-10-05 08:47:29.402072] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:53.022 [2024-10-05 08:47:29.402139] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.022 [2024-10-05 08:47:29.402156] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:53.022 [2024-10-05 08:47:29.402168] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.022 [2024-10-05 08:47:29.404541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.022 [2024-10-05 08:47:29.404581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:53.022 BaseBdev2 00:10:53.022 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.022 08:47:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:53.022 08:47:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:53.022 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.022 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:53.022 BaseBdev3_malloc 00:10:53.022 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.022 08:47:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:53.022 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.022 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.022 true 00:10:53.022 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.022 08:47:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:53.022 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.022 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.023 [2024-10-05 08:47:29.473942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:53.023 [2024-10-05 08:47:29.474011] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.023 [2024-10-05 08:47:29.474028] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:53.023 [2024-10-05 08:47:29.474039] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.023 [2024-10-05 08:47:29.476418] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.023 [2024-10-05 08:47:29.476471] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:53.023 BaseBdev3 00:10:53.023 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.023 08:47:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:53.023 08:47:29 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:53.023 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.023 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.283 BaseBdev4_malloc 00:10:53.283 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.283 08:47:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:53.283 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.283 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.283 true 00:10:53.283 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.283 08:47:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:53.283 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.283 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.283 [2024-10-05 08:47:29.547686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:53.283 [2024-10-05 08:47:29.547747] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.283 [2024-10-05 08:47:29.547764] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:53.283 [2024-10-05 08:47:29.547777] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.283 [2024-10-05 08:47:29.550091] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.283 [2024-10-05 08:47:29.550129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:53.283 BaseBdev4 
00:10:53.283 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.283 08:47:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:53.283 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.283 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.283 [2024-10-05 08:47:29.559754] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:53.283 [2024-10-05 08:47:29.561863] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:53.283 [2024-10-05 08:47:29.562025] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:53.283 [2024-10-05 08:47:29.562091] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:53.283 [2024-10-05 08:47:29.562308] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:53.283 [2024-10-05 08:47:29.562322] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:53.283 [2024-10-05 08:47:29.562558] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:53.283 [2024-10-05 08:47:29.562716] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:53.283 [2024-10-05 08:47:29.562724] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:53.283 [2024-10-05 08:47:29.562871] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:53.283 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.283 08:47:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:10:53.283 08:47:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:53.283 08:47:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:53.283 08:47:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:53.283 08:47:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.283 08:47:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.283 08:47:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.283 08:47:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.283 08:47:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.283 08:47:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.283 08:47:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.283 08:47:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.283 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.283 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.283 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.283 08:47:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.283 "name": "raid_bdev1", 00:10:53.283 "uuid": "f8fe40c7-76ff-4821-8690-3e3d578cc93d", 00:10:53.283 "strip_size_kb": 64, 00:10:53.283 "state": "online", 00:10:53.283 "raid_level": "raid0", 00:10:53.283 "superblock": true, 00:10:53.283 "num_base_bdevs": 4, 00:10:53.283 "num_base_bdevs_discovered": 4, 00:10:53.283 
"num_base_bdevs_operational": 4, 00:10:53.283 "base_bdevs_list": [ 00:10:53.283 { 00:10:53.283 "name": "BaseBdev1", 00:10:53.283 "uuid": "934944a1-357f-5517-9699-95ee8adb2a99", 00:10:53.283 "is_configured": true, 00:10:53.283 "data_offset": 2048, 00:10:53.283 "data_size": 63488 00:10:53.283 }, 00:10:53.283 { 00:10:53.283 "name": "BaseBdev2", 00:10:53.283 "uuid": "294ef0dc-3863-55f2-9f4b-6ff52e2c026b", 00:10:53.283 "is_configured": true, 00:10:53.283 "data_offset": 2048, 00:10:53.283 "data_size": 63488 00:10:53.283 }, 00:10:53.283 { 00:10:53.284 "name": "BaseBdev3", 00:10:53.284 "uuid": "59760cbc-1c58-52b5-ae9a-c63ff496ec54", 00:10:53.284 "is_configured": true, 00:10:53.284 "data_offset": 2048, 00:10:53.284 "data_size": 63488 00:10:53.284 }, 00:10:53.284 { 00:10:53.284 "name": "BaseBdev4", 00:10:53.284 "uuid": "413f680b-6c58-5ae1-ae83-a814c9197e5a", 00:10:53.284 "is_configured": true, 00:10:53.284 "data_offset": 2048, 00:10:53.284 "data_size": 63488 00:10:53.284 } 00:10:53.284 ] 00:10:53.284 }' 00:10:53.284 08:47:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.284 08:47:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.855 08:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:53.855 08:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:53.855 [2024-10-05 08:47:30.096267] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:54.819 08:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:54.819 08:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.819 08:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.819 08:47:31 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.819 08:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:54.819 08:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:54.819 08:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:54.819 08:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:54.819 08:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:54.819 08:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:54.819 08:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:54.819 08:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.819 08:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.819 08:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.819 08:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.819 08:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.819 08:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.819 08:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.819 08:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:54.819 08:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.819 08:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.819 08:47:31 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.819 08:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.819 "name": "raid_bdev1", 00:10:54.819 "uuid": "f8fe40c7-76ff-4821-8690-3e3d578cc93d", 00:10:54.819 "strip_size_kb": 64, 00:10:54.819 "state": "online", 00:10:54.819 "raid_level": "raid0", 00:10:54.819 "superblock": true, 00:10:54.819 "num_base_bdevs": 4, 00:10:54.819 "num_base_bdevs_discovered": 4, 00:10:54.819 "num_base_bdevs_operational": 4, 00:10:54.819 "base_bdevs_list": [ 00:10:54.819 { 00:10:54.819 "name": "BaseBdev1", 00:10:54.820 "uuid": "934944a1-357f-5517-9699-95ee8adb2a99", 00:10:54.820 "is_configured": true, 00:10:54.820 "data_offset": 2048, 00:10:54.820 "data_size": 63488 00:10:54.820 }, 00:10:54.820 { 00:10:54.820 "name": "BaseBdev2", 00:10:54.820 "uuid": "294ef0dc-3863-55f2-9f4b-6ff52e2c026b", 00:10:54.820 "is_configured": true, 00:10:54.820 "data_offset": 2048, 00:10:54.820 "data_size": 63488 00:10:54.820 }, 00:10:54.820 { 00:10:54.820 "name": "BaseBdev3", 00:10:54.820 "uuid": "59760cbc-1c58-52b5-ae9a-c63ff496ec54", 00:10:54.820 "is_configured": true, 00:10:54.820 "data_offset": 2048, 00:10:54.820 "data_size": 63488 00:10:54.820 }, 00:10:54.820 { 00:10:54.820 "name": "BaseBdev4", 00:10:54.820 "uuid": "413f680b-6c58-5ae1-ae83-a814c9197e5a", 00:10:54.820 "is_configured": true, 00:10:54.820 "data_offset": 2048, 00:10:54.820 "data_size": 63488 00:10:54.820 } 00:10:54.820 ] 00:10:54.820 }' 00:10:54.820 08:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.820 08:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.080 08:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:55.080 08:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.080 08:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:55.080 [2024-10-05 08:47:31.472621] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:55.080 [2024-10-05 08:47:31.472667] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:55.080 [2024-10-05 08:47:31.475372] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:55.080 [2024-10-05 08:47:31.475466] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.080 [2024-10-05 08:47:31.475546] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:55.080 [2024-10-05 08:47:31.475593] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:55.080 { 00:10:55.080 "results": [ 00:10:55.080 { 00:10:55.080 "job": "raid_bdev1", 00:10:55.080 "core_mask": "0x1", 00:10:55.080 "workload": "randrw", 00:10:55.080 "percentage": 50, 00:10:55.080 "status": "finished", 00:10:55.080 "queue_depth": 1, 00:10:55.080 "io_size": 131072, 00:10:55.080 "runtime": 1.376819, 00:10:55.080 "iops": 14257.502257014175, 00:10:55.080 "mibps": 1782.187782126772, 00:10:55.080 "io_failed": 1, 00:10:55.080 "io_timeout": 0, 00:10:55.080 "avg_latency_us": 99.013265312705, 00:10:55.080 "min_latency_us": 24.146724890829695, 00:10:55.080 "max_latency_us": 1352.216593886463 00:10:55.080 } 00:10:55.080 ], 00:10:55.080 "core_count": 1 00:10:55.080 } 00:10:55.080 08:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.080 08:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69779 00:10:55.080 08:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 69779 ']' 00:10:55.080 08:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 69779 00:10:55.080 08:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 
00:10:55.080 08:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:55.080 08:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69779 00:10:55.080 08:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:55.080 08:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:55.080 08:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69779' 00:10:55.080 killing process with pid 69779 00:10:55.080 08:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 69779 00:10:55.080 [2024-10-05 08:47:31.506835] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:55.080 08:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 69779 00:10:55.650 [2024-10-05 08:47:31.853521] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:57.031 08:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:57.031 08:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wxOMH0AHvh 00:10:57.031 08:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:57.031 08:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:10:57.031 08:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:57.031 08:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:57.031 08:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:57.031 08:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:10:57.031 00:10:57.031 real 0m4.939s 00:10:57.031 user 0m5.611s 00:10:57.031 sys 0m0.715s 00:10:57.031 08:47:33 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:57.031 ************************************ 00:10:57.031 END TEST raid_write_error_test 00:10:57.031 08:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.031 ************************************ 00:10:57.031 08:47:33 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:57.031 08:47:33 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:10:57.031 08:47:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:57.031 08:47:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:57.031 08:47:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:57.031 ************************************ 00:10:57.031 START TEST raid_state_function_test 00:10:57.031 ************************************ 00:10:57.031 08:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:10:57.031 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:57.031 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:57.031 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:57.031 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:57.031 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:57.031 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:57.031 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:57.031 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:57.031 08:47:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:57.031 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:57.031 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:57.031 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:57.031 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:57.031 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:57.031 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:57.031 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:57.031 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:57.031 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:57.031 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:57.031 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:57.031 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:57.031 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:57.031 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:57.031 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:57.031 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:57.031 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:57.031 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:10:57.031 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:57.031 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:57.031 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69898 00:10:57.031 Process raid pid: 69898 00:10:57.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.031 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69898' 00:10:57.031 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69898 00:10:57.031 08:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 69898 ']' 00:10:57.031 08:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.031 08:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:57.031 08:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.031 08:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:57.031 08:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.031 08:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:57.031 [2024-10-05 08:47:33.426793] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:10:57.031 [2024-10-05 08:47:33.426924] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:57.291 [2024-10-05 08:47:33.598118] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.551 [2024-10-05 08:47:33.844271] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.811 [2024-10-05 08:47:34.082861] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:57.812 [2024-10-05 08:47:34.082900] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:57.812 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:57.812 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:57.812 08:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:57.812 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.812 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.812 [2024-10-05 08:47:34.253014] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:57.812 [2024-10-05 08:47:34.253071] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:57.812 [2024-10-05 08:47:34.253082] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:57.812 [2024-10-05 08:47:34.253092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:57.812 [2024-10-05 08:47:34.253098] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:57.812 [2024-10-05 08:47:34.253108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:57.812 [2024-10-05 08:47:34.253114] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:57.812 [2024-10-05 08:47:34.253123] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:57.812 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.812 08:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:57.812 08:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.812 08:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.812 08:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:57.812 08:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.812 08:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.812 08:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.812 08:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.812 08:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.812 08:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.812 08:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.812 08:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.812 08:47:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.812 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.812 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.071 08:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.071 "name": "Existed_Raid", 00:10:58.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.071 "strip_size_kb": 64, 00:10:58.071 "state": "configuring", 00:10:58.071 "raid_level": "concat", 00:10:58.071 "superblock": false, 00:10:58.071 "num_base_bdevs": 4, 00:10:58.071 "num_base_bdevs_discovered": 0, 00:10:58.072 "num_base_bdevs_operational": 4, 00:10:58.072 "base_bdevs_list": [ 00:10:58.072 { 00:10:58.072 "name": "BaseBdev1", 00:10:58.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.072 "is_configured": false, 00:10:58.072 "data_offset": 0, 00:10:58.072 "data_size": 0 00:10:58.072 }, 00:10:58.072 { 00:10:58.072 "name": "BaseBdev2", 00:10:58.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.072 "is_configured": false, 00:10:58.072 "data_offset": 0, 00:10:58.072 "data_size": 0 00:10:58.072 }, 00:10:58.072 { 00:10:58.072 "name": "BaseBdev3", 00:10:58.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.072 "is_configured": false, 00:10:58.072 "data_offset": 0, 00:10:58.072 "data_size": 0 00:10:58.072 }, 00:10:58.072 { 00:10:58.072 "name": "BaseBdev4", 00:10:58.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.072 "is_configured": false, 00:10:58.072 "data_offset": 0, 00:10:58.072 "data_size": 0 00:10:58.072 } 00:10:58.072 ] 00:10:58.072 }' 00:10:58.072 08:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.072 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.332 08:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:58.332 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.332 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.332 [2024-10-05 08:47:34.680166] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:58.332 [2024-10-05 08:47:34.680214] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:58.332 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.332 08:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:58.332 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.332 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.332 [2024-10-05 08:47:34.688180] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:58.332 [2024-10-05 08:47:34.688218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:58.332 [2024-10-05 08:47:34.688238] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:58.332 [2024-10-05 08:47:34.688248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:58.332 [2024-10-05 08:47:34.688255] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:58.332 [2024-10-05 08:47:34.688265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:58.332 [2024-10-05 08:47:34.688271] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:58.332 [2024-10-05 08:47:34.688281] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:58.332 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.332 08:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:58.332 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.332 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.332 [2024-10-05 08:47:34.776757] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:58.332 BaseBdev1 00:10:58.332 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.332 08:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:58.332 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:58.332 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:58.332 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:58.332 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:58.332 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:58.332 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:58.332 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.332 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.332 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.332 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:58.332 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.332 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.332 [ 00:10:58.332 { 00:10:58.332 "name": "BaseBdev1", 00:10:58.332 "aliases": [ 00:10:58.332 "88a85296-bbcf-47f9-b7e8-a43ee9bfc72a" 00:10:58.332 ], 00:10:58.332 "product_name": "Malloc disk", 00:10:58.332 "block_size": 512, 00:10:58.332 "num_blocks": 65536, 00:10:58.332 "uuid": "88a85296-bbcf-47f9-b7e8-a43ee9bfc72a", 00:10:58.332 "assigned_rate_limits": { 00:10:58.332 "rw_ios_per_sec": 0, 00:10:58.332 "rw_mbytes_per_sec": 0, 00:10:58.332 "r_mbytes_per_sec": 0, 00:10:58.332 "w_mbytes_per_sec": 0 00:10:58.332 }, 00:10:58.332 "claimed": true, 00:10:58.332 "claim_type": "exclusive_write", 00:10:58.332 "zoned": false, 00:10:58.332 "supported_io_types": { 00:10:58.332 "read": true, 00:10:58.332 "write": true, 00:10:58.332 "unmap": true, 00:10:58.332 "flush": true, 00:10:58.332 "reset": true, 00:10:58.332 "nvme_admin": false, 00:10:58.332 "nvme_io": false, 00:10:58.332 "nvme_io_md": false, 00:10:58.332 "write_zeroes": true, 00:10:58.332 "zcopy": true, 00:10:58.332 "get_zone_info": false, 00:10:58.332 "zone_management": false, 00:10:58.332 "zone_append": false, 00:10:58.332 "compare": false, 00:10:58.332 "compare_and_write": false, 00:10:58.332 "abort": true, 00:10:58.332 "seek_hole": false, 00:10:58.332 "seek_data": false, 00:10:58.333 "copy": true, 00:10:58.333 "nvme_iov_md": false 00:10:58.333 }, 00:10:58.333 "memory_domains": [ 00:10:58.333 { 00:10:58.333 "dma_device_id": "system", 00:10:58.333 "dma_device_type": 1 00:10:58.333 }, 00:10:58.333 { 00:10:58.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.333 "dma_device_type": 2 00:10:58.333 } 00:10:58.333 ], 00:10:58.333 "driver_specific": {} 00:10:58.333 } 00:10:58.333 ] 00:10:58.333 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:58.333 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:58.333 08:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:58.333 08:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.333 08:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.333 08:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:58.333 08:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.333 08:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.333 08:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.333 08:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.333 08:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.333 08:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.593 08:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.593 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.593 08:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.593 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.593 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.593 08:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.593 "name": "Existed_Raid", 
00:10:58.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.593 "strip_size_kb": 64, 00:10:58.593 "state": "configuring", 00:10:58.593 "raid_level": "concat", 00:10:58.593 "superblock": false, 00:10:58.593 "num_base_bdevs": 4, 00:10:58.593 "num_base_bdevs_discovered": 1, 00:10:58.593 "num_base_bdevs_operational": 4, 00:10:58.593 "base_bdevs_list": [ 00:10:58.593 { 00:10:58.593 "name": "BaseBdev1", 00:10:58.593 "uuid": "88a85296-bbcf-47f9-b7e8-a43ee9bfc72a", 00:10:58.593 "is_configured": true, 00:10:58.593 "data_offset": 0, 00:10:58.593 "data_size": 65536 00:10:58.593 }, 00:10:58.593 { 00:10:58.593 "name": "BaseBdev2", 00:10:58.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.593 "is_configured": false, 00:10:58.593 "data_offset": 0, 00:10:58.593 "data_size": 0 00:10:58.593 }, 00:10:58.593 { 00:10:58.593 "name": "BaseBdev3", 00:10:58.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.593 "is_configured": false, 00:10:58.593 "data_offset": 0, 00:10:58.593 "data_size": 0 00:10:58.593 }, 00:10:58.593 { 00:10:58.593 "name": "BaseBdev4", 00:10:58.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.593 "is_configured": false, 00:10:58.593 "data_offset": 0, 00:10:58.593 "data_size": 0 00:10:58.593 } 00:10:58.593 ] 00:10:58.593 }' 00:10:58.593 08:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.593 08:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.853 08:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:58.853 08:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.853 08:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.853 [2024-10-05 08:47:35.243984] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:58.853 [2024-10-05 08:47:35.244033] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:58.853 08:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.853 08:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:58.853 08:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.853 08:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.853 [2024-10-05 08:47:35.256016] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:58.853 [2024-10-05 08:47:35.258076] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:58.853 [2024-10-05 08:47:35.258114] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:58.853 [2024-10-05 08:47:35.258125] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:58.853 [2024-10-05 08:47:35.258136] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:58.853 [2024-10-05 08:47:35.258142] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:58.853 [2024-10-05 08:47:35.258151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:58.853 08:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.853 08:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:58.853 08:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:58.853 08:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:58.853 08:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.853 08:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.853 08:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:58.853 08:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.853 08:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.853 08:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.853 08:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.853 08:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.853 08:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.853 08:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.853 08:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.853 08:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.853 08:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.853 08:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.853 08:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.853 "name": "Existed_Raid", 00:10:58.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.853 "strip_size_kb": 64, 00:10:58.853 "state": "configuring", 00:10:58.853 "raid_level": "concat", 00:10:58.853 "superblock": false, 00:10:58.853 "num_base_bdevs": 4, 00:10:58.853 
"num_base_bdevs_discovered": 1, 00:10:58.853 "num_base_bdevs_operational": 4, 00:10:58.853 "base_bdevs_list": [ 00:10:58.853 { 00:10:58.853 "name": "BaseBdev1", 00:10:58.853 "uuid": "88a85296-bbcf-47f9-b7e8-a43ee9bfc72a", 00:10:58.853 "is_configured": true, 00:10:58.853 "data_offset": 0, 00:10:58.853 "data_size": 65536 00:10:58.853 }, 00:10:58.853 { 00:10:58.853 "name": "BaseBdev2", 00:10:58.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.853 "is_configured": false, 00:10:58.853 "data_offset": 0, 00:10:58.853 "data_size": 0 00:10:58.853 }, 00:10:58.853 { 00:10:58.853 "name": "BaseBdev3", 00:10:58.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.853 "is_configured": false, 00:10:58.853 "data_offset": 0, 00:10:58.853 "data_size": 0 00:10:58.853 }, 00:10:58.853 { 00:10:58.853 "name": "BaseBdev4", 00:10:58.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.854 "is_configured": false, 00:10:58.854 "data_offset": 0, 00:10:58.854 "data_size": 0 00:10:58.854 } 00:10:58.854 ] 00:10:58.854 }' 00:10:58.854 08:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.854 08:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.424 08:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:59.424 08:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.424 08:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.424 [2024-10-05 08:47:35.691429] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:59.424 BaseBdev2 00:10:59.424 08:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.424 08:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:59.424 08:47:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:59.424 08:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:59.424 08:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:59.424 08:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:59.424 08:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:59.424 08:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:59.424 08:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.424 08:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.424 08:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.424 08:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:59.424 08:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.424 08:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.424 [ 00:10:59.424 { 00:10:59.424 "name": "BaseBdev2", 00:10:59.424 "aliases": [ 00:10:59.424 "5ad1ed0b-59b4-4bb9-a41c-d149e5088066" 00:10:59.424 ], 00:10:59.424 "product_name": "Malloc disk", 00:10:59.424 "block_size": 512, 00:10:59.424 "num_blocks": 65536, 00:10:59.424 "uuid": "5ad1ed0b-59b4-4bb9-a41c-d149e5088066", 00:10:59.424 "assigned_rate_limits": { 00:10:59.424 "rw_ios_per_sec": 0, 00:10:59.424 "rw_mbytes_per_sec": 0, 00:10:59.424 "r_mbytes_per_sec": 0, 00:10:59.424 "w_mbytes_per_sec": 0 00:10:59.424 }, 00:10:59.424 "claimed": true, 00:10:59.424 "claim_type": "exclusive_write", 00:10:59.424 "zoned": false, 00:10:59.424 "supported_io_types": { 
00:10:59.424 "read": true, 00:10:59.424 "write": true, 00:10:59.424 "unmap": true, 00:10:59.424 "flush": true, 00:10:59.424 "reset": true, 00:10:59.424 "nvme_admin": false, 00:10:59.424 "nvme_io": false, 00:10:59.424 "nvme_io_md": false, 00:10:59.424 "write_zeroes": true, 00:10:59.424 "zcopy": true, 00:10:59.424 "get_zone_info": false, 00:10:59.424 "zone_management": false, 00:10:59.424 "zone_append": false, 00:10:59.424 "compare": false, 00:10:59.424 "compare_and_write": false, 00:10:59.424 "abort": true, 00:10:59.424 "seek_hole": false, 00:10:59.424 "seek_data": false, 00:10:59.424 "copy": true, 00:10:59.424 "nvme_iov_md": false 00:10:59.424 }, 00:10:59.424 "memory_domains": [ 00:10:59.424 { 00:10:59.424 "dma_device_id": "system", 00:10:59.424 "dma_device_type": 1 00:10:59.424 }, 00:10:59.424 { 00:10:59.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.424 "dma_device_type": 2 00:10:59.424 } 00:10:59.424 ], 00:10:59.424 "driver_specific": {} 00:10:59.424 } 00:10:59.424 ] 00:10:59.424 08:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.424 08:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:59.424 08:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:59.424 08:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:59.424 08:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:59.424 08:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.424 08:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.424 08:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:59.424 08:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:59.424 08:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.424 08:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.424 08:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.424 08:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.424 08:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.424 08:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.424 08:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.424 08:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.424 08:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.424 08:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.424 08:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.424 "name": "Existed_Raid", 00:10:59.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.424 "strip_size_kb": 64, 00:10:59.424 "state": "configuring", 00:10:59.424 "raid_level": "concat", 00:10:59.424 "superblock": false, 00:10:59.424 "num_base_bdevs": 4, 00:10:59.424 "num_base_bdevs_discovered": 2, 00:10:59.424 "num_base_bdevs_operational": 4, 00:10:59.424 "base_bdevs_list": [ 00:10:59.424 { 00:10:59.424 "name": "BaseBdev1", 00:10:59.424 "uuid": "88a85296-bbcf-47f9-b7e8-a43ee9bfc72a", 00:10:59.424 "is_configured": true, 00:10:59.424 "data_offset": 0, 00:10:59.424 "data_size": 65536 00:10:59.424 }, 00:10:59.424 { 00:10:59.424 "name": "BaseBdev2", 00:10:59.424 "uuid": "5ad1ed0b-59b4-4bb9-a41c-d149e5088066", 00:10:59.424 
"is_configured": true, 00:10:59.424 "data_offset": 0, 00:10:59.424 "data_size": 65536 00:10:59.424 }, 00:10:59.424 { 00:10:59.424 "name": "BaseBdev3", 00:10:59.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.424 "is_configured": false, 00:10:59.424 "data_offset": 0, 00:10:59.424 "data_size": 0 00:10:59.424 }, 00:10:59.424 { 00:10:59.424 "name": "BaseBdev4", 00:10:59.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.424 "is_configured": false, 00:10:59.424 "data_offset": 0, 00:10:59.424 "data_size": 0 00:10:59.424 } 00:10:59.424 ] 00:10:59.424 }' 00:10:59.424 08:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.424 08:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.994 08:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:59.994 08:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.994 08:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.994 [2024-10-05 08:47:36.246463] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:59.994 BaseBdev3 00:10:59.994 08:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.994 08:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:59.994 08:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:59.994 08:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:59.994 08:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:59.994 08:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:59.994 08:47:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:59.994 08:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:59.994 08:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.994 08:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.994 08:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.994 08:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:59.994 08:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.994 08:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.994 [ 00:10:59.994 { 00:10:59.994 "name": "BaseBdev3", 00:10:59.994 "aliases": [ 00:10:59.994 "dfd8fac8-9f03-4bfe-91e6-47d12b6949ea" 00:10:59.994 ], 00:10:59.994 "product_name": "Malloc disk", 00:10:59.994 "block_size": 512, 00:10:59.994 "num_blocks": 65536, 00:10:59.994 "uuid": "dfd8fac8-9f03-4bfe-91e6-47d12b6949ea", 00:10:59.994 "assigned_rate_limits": { 00:10:59.994 "rw_ios_per_sec": 0, 00:10:59.994 "rw_mbytes_per_sec": 0, 00:10:59.994 "r_mbytes_per_sec": 0, 00:10:59.994 "w_mbytes_per_sec": 0 00:10:59.994 }, 00:10:59.994 "claimed": true, 00:10:59.994 "claim_type": "exclusive_write", 00:10:59.994 "zoned": false, 00:10:59.994 "supported_io_types": { 00:10:59.994 "read": true, 00:10:59.994 "write": true, 00:10:59.994 "unmap": true, 00:10:59.994 "flush": true, 00:10:59.994 "reset": true, 00:10:59.994 "nvme_admin": false, 00:10:59.994 "nvme_io": false, 00:10:59.994 "nvme_io_md": false, 00:10:59.994 "write_zeroes": true, 00:10:59.994 "zcopy": true, 00:10:59.994 "get_zone_info": false, 00:10:59.994 "zone_management": false, 00:10:59.994 "zone_append": false, 00:10:59.994 "compare": false, 00:10:59.994 "compare_and_write": false, 
00:10:59.994 "abort": true, 00:10:59.994 "seek_hole": false, 00:10:59.994 "seek_data": false, 00:10:59.994 "copy": true, 00:10:59.994 "nvme_iov_md": false 00:10:59.994 }, 00:10:59.994 "memory_domains": [ 00:10:59.994 { 00:10:59.994 "dma_device_id": "system", 00:10:59.994 "dma_device_type": 1 00:10:59.994 }, 00:10:59.994 { 00:10:59.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.994 "dma_device_type": 2 00:10:59.994 } 00:10:59.994 ], 00:10:59.994 "driver_specific": {} 00:10:59.994 } 00:10:59.994 ] 00:10:59.994 08:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.994 08:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:59.994 08:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:59.994 08:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:59.994 08:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:59.994 08:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.994 08:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.994 08:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:59.994 08:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.994 08:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.994 08:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.994 08:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.994 08:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:59.994 08:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.994 08:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.994 08:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.994 08:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.994 08:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.995 08:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.995 08:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.995 "name": "Existed_Raid", 00:10:59.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.995 "strip_size_kb": 64, 00:10:59.995 "state": "configuring", 00:10:59.995 "raid_level": "concat", 00:10:59.995 "superblock": false, 00:10:59.995 "num_base_bdevs": 4, 00:10:59.995 "num_base_bdevs_discovered": 3, 00:10:59.995 "num_base_bdevs_operational": 4, 00:10:59.995 "base_bdevs_list": [ 00:10:59.995 { 00:10:59.995 "name": "BaseBdev1", 00:10:59.995 "uuid": "88a85296-bbcf-47f9-b7e8-a43ee9bfc72a", 00:10:59.995 "is_configured": true, 00:10:59.995 "data_offset": 0, 00:10:59.995 "data_size": 65536 00:10:59.995 }, 00:10:59.995 { 00:10:59.995 "name": "BaseBdev2", 00:10:59.995 "uuid": "5ad1ed0b-59b4-4bb9-a41c-d149e5088066", 00:10:59.995 "is_configured": true, 00:10:59.995 "data_offset": 0, 00:10:59.995 "data_size": 65536 00:10:59.995 }, 00:10:59.995 { 00:10:59.995 "name": "BaseBdev3", 00:10:59.995 "uuid": "dfd8fac8-9f03-4bfe-91e6-47d12b6949ea", 00:10:59.995 "is_configured": true, 00:10:59.995 "data_offset": 0, 00:10:59.995 "data_size": 65536 00:10:59.995 }, 00:10:59.995 { 00:10:59.995 "name": "BaseBdev4", 00:10:59.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.995 "is_configured": false, 
00:10:59.995 "data_offset": 0, 00:10:59.995 "data_size": 0 00:10:59.995 } 00:10:59.995 ] 00:10:59.995 }' 00:10:59.995 08:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.995 08:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.254 08:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:00.254 08:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.254 08:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.515 [2024-10-05 08:47:36.750823] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:00.515 [2024-10-05 08:47:36.750878] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:00.515 [2024-10-05 08:47:36.750887] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:00.515 [2024-10-05 08:47:36.751229] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:00.515 [2024-10-05 08:47:36.751415] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:00.515 [2024-10-05 08:47:36.751434] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:00.515 [2024-10-05 08:47:36.751709] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.515 BaseBdev4 00:11:00.515 08:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.515 08:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:00.515 08:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:00.515 08:47:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:00.515 08:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:00.515 08:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:00.515 08:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:00.515 08:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:00.515 08:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.515 08:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.515 08:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.515 08:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:00.515 08:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.515 08:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.515 [ 00:11:00.515 { 00:11:00.515 "name": "BaseBdev4", 00:11:00.515 "aliases": [ 00:11:00.515 "b3649a8a-255d-4697-841f-e2472c1bd208" 00:11:00.515 ], 00:11:00.515 "product_name": "Malloc disk", 00:11:00.515 "block_size": 512, 00:11:00.515 "num_blocks": 65536, 00:11:00.515 "uuid": "b3649a8a-255d-4697-841f-e2472c1bd208", 00:11:00.515 "assigned_rate_limits": { 00:11:00.515 "rw_ios_per_sec": 0, 00:11:00.515 "rw_mbytes_per_sec": 0, 00:11:00.515 "r_mbytes_per_sec": 0, 00:11:00.515 "w_mbytes_per_sec": 0 00:11:00.515 }, 00:11:00.515 "claimed": true, 00:11:00.515 "claim_type": "exclusive_write", 00:11:00.515 "zoned": false, 00:11:00.515 "supported_io_types": { 00:11:00.515 "read": true, 00:11:00.515 "write": true, 00:11:00.515 "unmap": true, 00:11:00.515 "flush": true, 00:11:00.515 "reset": true, 00:11:00.515 
"nvme_admin": false, 00:11:00.515 "nvme_io": false, 00:11:00.515 "nvme_io_md": false, 00:11:00.515 "write_zeroes": true, 00:11:00.515 "zcopy": true, 00:11:00.515 "get_zone_info": false, 00:11:00.515 "zone_management": false, 00:11:00.515 "zone_append": false, 00:11:00.515 "compare": false, 00:11:00.515 "compare_and_write": false, 00:11:00.515 "abort": true, 00:11:00.515 "seek_hole": false, 00:11:00.515 "seek_data": false, 00:11:00.515 "copy": true, 00:11:00.515 "nvme_iov_md": false 00:11:00.515 }, 00:11:00.515 "memory_domains": [ 00:11:00.515 { 00:11:00.515 "dma_device_id": "system", 00:11:00.515 "dma_device_type": 1 00:11:00.515 }, 00:11:00.515 { 00:11:00.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.515 "dma_device_type": 2 00:11:00.515 } 00:11:00.515 ], 00:11:00.515 "driver_specific": {} 00:11:00.515 } 00:11:00.515 ] 00:11:00.515 08:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.515 08:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:00.515 08:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:00.515 08:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:00.515 08:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:00.516 08:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.516 08:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.516 08:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.516 08:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.516 08:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.516 
08:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.516 08:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.516 08:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.516 08:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.516 08:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.516 08:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.516 08:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.516 08:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.516 08:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.516 08:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.516 "name": "Existed_Raid", 00:11:00.516 "uuid": "f5fa4a21-5161-4f7b-802f-657aacd6de19", 00:11:00.516 "strip_size_kb": 64, 00:11:00.516 "state": "online", 00:11:00.516 "raid_level": "concat", 00:11:00.516 "superblock": false, 00:11:00.516 "num_base_bdevs": 4, 00:11:00.516 "num_base_bdevs_discovered": 4, 00:11:00.516 "num_base_bdevs_operational": 4, 00:11:00.516 "base_bdevs_list": [ 00:11:00.516 { 00:11:00.516 "name": "BaseBdev1", 00:11:00.516 "uuid": "88a85296-bbcf-47f9-b7e8-a43ee9bfc72a", 00:11:00.516 "is_configured": true, 00:11:00.516 "data_offset": 0, 00:11:00.516 "data_size": 65536 00:11:00.516 }, 00:11:00.516 { 00:11:00.516 "name": "BaseBdev2", 00:11:00.516 "uuid": "5ad1ed0b-59b4-4bb9-a41c-d149e5088066", 00:11:00.516 "is_configured": true, 00:11:00.516 "data_offset": 0, 00:11:00.516 "data_size": 65536 00:11:00.516 }, 00:11:00.516 { 00:11:00.516 "name": "BaseBdev3", 
00:11:00.516 "uuid": "dfd8fac8-9f03-4bfe-91e6-47d12b6949ea", 00:11:00.516 "is_configured": true, 00:11:00.516 "data_offset": 0, 00:11:00.516 "data_size": 65536 00:11:00.516 }, 00:11:00.516 { 00:11:00.516 "name": "BaseBdev4", 00:11:00.516 "uuid": "b3649a8a-255d-4697-841f-e2472c1bd208", 00:11:00.516 "is_configured": true, 00:11:00.516 "data_offset": 0, 00:11:00.516 "data_size": 65536 00:11:00.516 } 00:11:00.516 ] 00:11:00.516 }' 00:11:00.516 08:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.516 08:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.777 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:00.777 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:00.777 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:00.777 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:00.777 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:00.777 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:00.777 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:00.777 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:00.777 08:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.777 08:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.777 [2024-10-05 08:47:37.198371] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:00.777 08:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.777 
08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:00.777 "name": "Existed_Raid", 00:11:00.777 "aliases": [ 00:11:00.777 "f5fa4a21-5161-4f7b-802f-657aacd6de19" 00:11:00.777 ], 00:11:00.777 "product_name": "Raid Volume", 00:11:00.777 "block_size": 512, 00:11:00.777 "num_blocks": 262144, 00:11:00.777 "uuid": "f5fa4a21-5161-4f7b-802f-657aacd6de19", 00:11:00.777 "assigned_rate_limits": { 00:11:00.777 "rw_ios_per_sec": 0, 00:11:00.777 "rw_mbytes_per_sec": 0, 00:11:00.777 "r_mbytes_per_sec": 0, 00:11:00.777 "w_mbytes_per_sec": 0 00:11:00.777 }, 00:11:00.777 "claimed": false, 00:11:00.777 "zoned": false, 00:11:00.777 "supported_io_types": { 00:11:00.777 "read": true, 00:11:00.777 "write": true, 00:11:00.777 "unmap": true, 00:11:00.777 "flush": true, 00:11:00.777 "reset": true, 00:11:00.777 "nvme_admin": false, 00:11:00.777 "nvme_io": false, 00:11:00.777 "nvme_io_md": false, 00:11:00.777 "write_zeroes": true, 00:11:00.777 "zcopy": false, 00:11:00.777 "get_zone_info": false, 00:11:00.777 "zone_management": false, 00:11:00.777 "zone_append": false, 00:11:00.777 "compare": false, 00:11:00.777 "compare_and_write": false, 00:11:00.777 "abort": false, 00:11:00.777 "seek_hole": false, 00:11:00.777 "seek_data": false, 00:11:00.777 "copy": false, 00:11:00.777 "nvme_iov_md": false 00:11:00.777 }, 00:11:00.777 "memory_domains": [ 00:11:00.777 { 00:11:00.777 "dma_device_id": "system", 00:11:00.777 "dma_device_type": 1 00:11:00.777 }, 00:11:00.777 { 00:11:00.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.777 "dma_device_type": 2 00:11:00.777 }, 00:11:00.777 { 00:11:00.777 "dma_device_id": "system", 00:11:00.777 "dma_device_type": 1 00:11:00.777 }, 00:11:00.777 { 00:11:00.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.777 "dma_device_type": 2 00:11:00.777 }, 00:11:00.777 { 00:11:00.777 "dma_device_id": "system", 00:11:00.777 "dma_device_type": 1 00:11:00.777 }, 00:11:00.777 { 00:11:00.777 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:00.777 "dma_device_type": 2 00:11:00.777 }, 00:11:00.777 { 00:11:00.777 "dma_device_id": "system", 00:11:00.777 "dma_device_type": 1 00:11:00.777 }, 00:11:00.777 { 00:11:00.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.777 "dma_device_type": 2 00:11:00.777 } 00:11:00.777 ], 00:11:00.777 "driver_specific": { 00:11:00.777 "raid": { 00:11:00.777 "uuid": "f5fa4a21-5161-4f7b-802f-657aacd6de19", 00:11:00.777 "strip_size_kb": 64, 00:11:00.777 "state": "online", 00:11:00.777 "raid_level": "concat", 00:11:00.777 "superblock": false, 00:11:00.777 "num_base_bdevs": 4, 00:11:00.777 "num_base_bdevs_discovered": 4, 00:11:00.777 "num_base_bdevs_operational": 4, 00:11:00.777 "base_bdevs_list": [ 00:11:00.777 { 00:11:00.777 "name": "BaseBdev1", 00:11:00.777 "uuid": "88a85296-bbcf-47f9-b7e8-a43ee9bfc72a", 00:11:00.777 "is_configured": true, 00:11:00.777 "data_offset": 0, 00:11:00.777 "data_size": 65536 00:11:00.777 }, 00:11:00.777 { 00:11:00.777 "name": "BaseBdev2", 00:11:00.777 "uuid": "5ad1ed0b-59b4-4bb9-a41c-d149e5088066", 00:11:00.777 "is_configured": true, 00:11:00.777 "data_offset": 0, 00:11:00.777 "data_size": 65536 00:11:00.777 }, 00:11:00.777 { 00:11:00.777 "name": "BaseBdev3", 00:11:00.777 "uuid": "dfd8fac8-9f03-4bfe-91e6-47d12b6949ea", 00:11:00.777 "is_configured": true, 00:11:00.777 "data_offset": 0, 00:11:00.777 "data_size": 65536 00:11:00.777 }, 00:11:00.777 { 00:11:00.777 "name": "BaseBdev4", 00:11:00.777 "uuid": "b3649a8a-255d-4697-841f-e2472c1bd208", 00:11:00.777 "is_configured": true, 00:11:00.777 "data_offset": 0, 00:11:00.777 "data_size": 65536 00:11:00.777 } 00:11:00.777 ] 00:11:00.777 } 00:11:00.777 } 00:11:00.777 }' 00:11:00.777 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:01.037 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:01.037 BaseBdev2 
00:11:01.037 BaseBdev3 00:11:01.037 BaseBdev4' 00:11:01.037 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.037 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:01.037 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.037 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.037 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:01.037 08:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.037 08:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.037 08:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.037 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.037 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.037 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.037 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:01.037 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.037 08:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.037 08:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.037 08:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.037 08:47:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.037 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.037 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.037 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:01.037 08:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.037 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.037 08:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.037 08:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.037 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.037 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.037 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.037 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:01.037 08:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.038 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.038 08:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.038 08:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.038 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.038 08:47:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.038 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:01.038 08:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.038 08:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.038 [2024-10-05 08:47:37.505591] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:01.038 [2024-10-05 08:47:37.505622] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:01.038 [2024-10-05 08:47:37.505675] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:01.298 08:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.298 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:01.298 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:01.298 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:01.298 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:01.298 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:01.298 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:01.298 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.298 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:01.298 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:01.298 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:01.298 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:01.298 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.298 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.298 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.298 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.298 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.298 08:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.298 08:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.298 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.298 08:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.298 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.298 "name": "Existed_Raid", 00:11:01.298 "uuid": "f5fa4a21-5161-4f7b-802f-657aacd6de19", 00:11:01.298 "strip_size_kb": 64, 00:11:01.298 "state": "offline", 00:11:01.298 "raid_level": "concat", 00:11:01.298 "superblock": false, 00:11:01.298 "num_base_bdevs": 4, 00:11:01.298 "num_base_bdevs_discovered": 3, 00:11:01.298 "num_base_bdevs_operational": 3, 00:11:01.298 "base_bdevs_list": [ 00:11:01.298 { 00:11:01.298 "name": null, 00:11:01.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.298 "is_configured": false, 00:11:01.298 "data_offset": 0, 00:11:01.298 "data_size": 65536 00:11:01.298 }, 00:11:01.298 { 00:11:01.298 "name": "BaseBdev2", 00:11:01.298 "uuid": "5ad1ed0b-59b4-4bb9-a41c-d149e5088066", 00:11:01.298 "is_configured": 
true, 00:11:01.298 "data_offset": 0, 00:11:01.298 "data_size": 65536 00:11:01.298 }, 00:11:01.298 { 00:11:01.298 "name": "BaseBdev3", 00:11:01.298 "uuid": "dfd8fac8-9f03-4bfe-91e6-47d12b6949ea", 00:11:01.298 "is_configured": true, 00:11:01.298 "data_offset": 0, 00:11:01.298 "data_size": 65536 00:11:01.298 }, 00:11:01.298 { 00:11:01.298 "name": "BaseBdev4", 00:11:01.298 "uuid": "b3649a8a-255d-4697-841f-e2472c1bd208", 00:11:01.298 "is_configured": true, 00:11:01.298 "data_offset": 0, 00:11:01.298 "data_size": 65536 00:11:01.298 } 00:11:01.298 ] 00:11:01.298 }' 00:11:01.298 08:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.298 08:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.558 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:01.558 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:01.818 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.818 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:01.818 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.818 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.818 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.819 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:01.819 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:01.819 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:01.819 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:01.819 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.819 [2024-10-05 08:47:38.081629] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:01.819 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.819 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:01.819 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:01.819 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:01.819 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.819 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.819 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.819 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.819 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:01.819 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:01.819 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:01.819 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.819 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.819 [2024-10-05 08:47:38.220651] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:02.078 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.078 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:02.078 08:47:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:02.078 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:02.078 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.078 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.078 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.078 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.078 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:02.078 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:02.078 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:02.078 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.078 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.078 [2024-10-05 08:47:38.368441] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:02.078 [2024-10-05 08:47:38.368502] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:02.078 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.078 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:02.078 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:02.078 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:02.078 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:02.078 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.078 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.078 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.078 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:02.078 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:02.078 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:02.078 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:02.078 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:02.078 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:02.078 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.078 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.339 BaseBdev2 00:11:02.339 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.339 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:02.339 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:02.339 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:02.339 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:02.339 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:02.339 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:11:02.339 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:02.339 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.339 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.339 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.339 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:02.339 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.339 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.339 [ 00:11:02.339 { 00:11:02.339 "name": "BaseBdev2", 00:11:02.339 "aliases": [ 00:11:02.339 "27b8aa49-0ee1-4d31-b821-37b887397662" 00:11:02.339 ], 00:11:02.339 "product_name": "Malloc disk", 00:11:02.339 "block_size": 512, 00:11:02.339 "num_blocks": 65536, 00:11:02.339 "uuid": "27b8aa49-0ee1-4d31-b821-37b887397662", 00:11:02.339 "assigned_rate_limits": { 00:11:02.339 "rw_ios_per_sec": 0, 00:11:02.339 "rw_mbytes_per_sec": 0, 00:11:02.339 "r_mbytes_per_sec": 0, 00:11:02.339 "w_mbytes_per_sec": 0 00:11:02.339 }, 00:11:02.339 "claimed": false, 00:11:02.339 "zoned": false, 00:11:02.339 "supported_io_types": { 00:11:02.339 "read": true, 00:11:02.339 "write": true, 00:11:02.339 "unmap": true, 00:11:02.339 "flush": true, 00:11:02.339 "reset": true, 00:11:02.339 "nvme_admin": false, 00:11:02.339 "nvme_io": false, 00:11:02.339 "nvme_io_md": false, 00:11:02.339 "write_zeroes": true, 00:11:02.339 "zcopy": true, 00:11:02.339 "get_zone_info": false, 00:11:02.339 "zone_management": false, 00:11:02.339 "zone_append": false, 00:11:02.339 "compare": false, 00:11:02.339 "compare_and_write": false, 00:11:02.339 "abort": true, 00:11:02.339 "seek_hole": false, 00:11:02.339 "seek_data": false, 
00:11:02.339 "copy": true, 00:11:02.339 "nvme_iov_md": false 00:11:02.339 }, 00:11:02.339 "memory_domains": [ 00:11:02.339 { 00:11:02.339 "dma_device_id": "system", 00:11:02.339 "dma_device_type": 1 00:11:02.339 }, 00:11:02.339 { 00:11:02.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.339 "dma_device_type": 2 00:11:02.339 } 00:11:02.339 ], 00:11:02.339 "driver_specific": {} 00:11:02.339 } 00:11:02.339 ] 00:11:02.339 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.339 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:02.339 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:02.339 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:02.339 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:02.339 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.339 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.339 BaseBdev3 00:11:02.339 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.339 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:02.339 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:02.339 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:02.339 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:02.339 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:02.339 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:02.339 
08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:02.339 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.339 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.339 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.339 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:02.339 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.339 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.339 [ 00:11:02.339 { 00:11:02.339 "name": "BaseBdev3", 00:11:02.339 "aliases": [ 00:11:02.339 "69fe4e8b-c821-49b2-8e8e-c8387f860c0e" 00:11:02.339 ], 00:11:02.339 "product_name": "Malloc disk", 00:11:02.339 "block_size": 512, 00:11:02.339 "num_blocks": 65536, 00:11:02.339 "uuid": "69fe4e8b-c821-49b2-8e8e-c8387f860c0e", 00:11:02.339 "assigned_rate_limits": { 00:11:02.339 "rw_ios_per_sec": 0, 00:11:02.339 "rw_mbytes_per_sec": 0, 00:11:02.339 "r_mbytes_per_sec": 0, 00:11:02.339 "w_mbytes_per_sec": 0 00:11:02.339 }, 00:11:02.339 "claimed": false, 00:11:02.339 "zoned": false, 00:11:02.339 "supported_io_types": { 00:11:02.339 "read": true, 00:11:02.339 "write": true, 00:11:02.339 "unmap": true, 00:11:02.339 "flush": true, 00:11:02.339 "reset": true, 00:11:02.339 "nvme_admin": false, 00:11:02.339 "nvme_io": false, 00:11:02.339 "nvme_io_md": false, 00:11:02.339 "write_zeroes": true, 00:11:02.339 "zcopy": true, 00:11:02.339 "get_zone_info": false, 00:11:02.339 "zone_management": false, 00:11:02.339 "zone_append": false, 00:11:02.339 "compare": false, 00:11:02.339 "compare_and_write": false, 00:11:02.339 "abort": true, 00:11:02.339 "seek_hole": false, 00:11:02.339 "seek_data": false, 00:11:02.339 
"copy": true, 00:11:02.339 "nvme_iov_md": false 00:11:02.339 }, 00:11:02.339 "memory_domains": [ 00:11:02.339 { 00:11:02.339 "dma_device_id": "system", 00:11:02.339 "dma_device_type": 1 00:11:02.339 }, 00:11:02.339 { 00:11:02.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.339 "dma_device_type": 2 00:11:02.339 } 00:11:02.339 ], 00:11:02.339 "driver_specific": {} 00:11:02.339 } 00:11:02.339 ] 00:11:02.339 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.339 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:02.340 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:02.340 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:02.340 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:02.340 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.340 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.340 BaseBdev4 00:11:02.340 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.340 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:02.340 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:02.340 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:02.340 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:02.340 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:02.340 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:02.340 08:47:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:02.340 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.340 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.340 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.340 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:02.340 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.340 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.340 [ 00:11:02.340 { 00:11:02.340 "name": "BaseBdev4", 00:11:02.340 "aliases": [ 00:11:02.340 "45246ba8-6b8c-4437-924a-b33085381c0a" 00:11:02.340 ], 00:11:02.340 "product_name": "Malloc disk", 00:11:02.340 "block_size": 512, 00:11:02.340 "num_blocks": 65536, 00:11:02.340 "uuid": "45246ba8-6b8c-4437-924a-b33085381c0a", 00:11:02.340 "assigned_rate_limits": { 00:11:02.340 "rw_ios_per_sec": 0, 00:11:02.340 "rw_mbytes_per_sec": 0, 00:11:02.340 "r_mbytes_per_sec": 0, 00:11:02.340 "w_mbytes_per_sec": 0 00:11:02.340 }, 00:11:02.340 "claimed": false, 00:11:02.340 "zoned": false, 00:11:02.340 "supported_io_types": { 00:11:02.340 "read": true, 00:11:02.340 "write": true, 00:11:02.340 "unmap": true, 00:11:02.340 "flush": true, 00:11:02.340 "reset": true, 00:11:02.340 "nvme_admin": false, 00:11:02.340 "nvme_io": false, 00:11:02.340 "nvme_io_md": false, 00:11:02.340 "write_zeroes": true, 00:11:02.340 "zcopy": true, 00:11:02.340 "get_zone_info": false, 00:11:02.340 "zone_management": false, 00:11:02.340 "zone_append": false, 00:11:02.340 "compare": false, 00:11:02.340 "compare_and_write": false, 00:11:02.340 "abort": true, 00:11:02.340 "seek_hole": false, 00:11:02.340 "seek_data": false, 00:11:02.340 "copy": true, 
00:11:02.340 "nvme_iov_md": false 00:11:02.340 }, 00:11:02.340 "memory_domains": [ 00:11:02.340 { 00:11:02.340 "dma_device_id": "system", 00:11:02.340 "dma_device_type": 1 00:11:02.340 }, 00:11:02.340 { 00:11:02.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.340 "dma_device_type": 2 00:11:02.340 } 00:11:02.340 ], 00:11:02.340 "driver_specific": {} 00:11:02.340 } 00:11:02.340 ] 00:11:02.340 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.340 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:02.340 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:02.340 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:02.340 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:02.340 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.340 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.340 [2024-10-05 08:47:38.760714] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:02.340 [2024-10-05 08:47:38.760850] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:02.340 [2024-10-05 08:47:38.760897] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:02.340 [2024-10-05 08:47:38.763014] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:02.340 [2024-10-05 08:47:38.763112] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:02.340 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.340 08:47:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:02.340 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.340 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.340 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:02.340 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.340 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.340 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.340 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.340 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.340 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.340 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.340 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.340 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.340 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.340 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.600 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.600 "name": "Existed_Raid", 00:11:02.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.600 "strip_size_kb": 64, 00:11:02.600 "state": "configuring", 00:11:02.600 
"raid_level": "concat", 00:11:02.600 "superblock": false, 00:11:02.600 "num_base_bdevs": 4, 00:11:02.600 "num_base_bdevs_discovered": 3, 00:11:02.600 "num_base_bdevs_operational": 4, 00:11:02.600 "base_bdevs_list": [ 00:11:02.600 { 00:11:02.600 "name": "BaseBdev1", 00:11:02.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.600 "is_configured": false, 00:11:02.600 "data_offset": 0, 00:11:02.600 "data_size": 0 00:11:02.600 }, 00:11:02.600 { 00:11:02.600 "name": "BaseBdev2", 00:11:02.600 "uuid": "27b8aa49-0ee1-4d31-b821-37b887397662", 00:11:02.600 "is_configured": true, 00:11:02.600 "data_offset": 0, 00:11:02.600 "data_size": 65536 00:11:02.600 }, 00:11:02.600 { 00:11:02.600 "name": "BaseBdev3", 00:11:02.600 "uuid": "69fe4e8b-c821-49b2-8e8e-c8387f860c0e", 00:11:02.600 "is_configured": true, 00:11:02.600 "data_offset": 0, 00:11:02.600 "data_size": 65536 00:11:02.600 }, 00:11:02.600 { 00:11:02.600 "name": "BaseBdev4", 00:11:02.600 "uuid": "45246ba8-6b8c-4437-924a-b33085381c0a", 00:11:02.600 "is_configured": true, 00:11:02.600 "data_offset": 0, 00:11:02.600 "data_size": 65536 00:11:02.600 } 00:11:02.600 ] 00:11:02.600 }' 00:11:02.600 08:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.600 08:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.860 08:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:02.860 08:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.860 08:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.860 [2024-10-05 08:47:39.163994] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:02.860 08:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.860 08:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:02.860 08:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.860 08:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.860 08:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:02.860 08:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.860 08:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.860 08:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.860 08:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.860 08:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.860 08:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.860 08:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.860 08:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.860 08:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.860 08:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.861 08:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.861 08:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.861 "name": "Existed_Raid", 00:11:02.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.861 "strip_size_kb": 64, 00:11:02.861 "state": "configuring", 00:11:02.861 "raid_level": "concat", 00:11:02.861 "superblock": false, 
00:11:02.861 "num_base_bdevs": 4, 00:11:02.861 "num_base_bdevs_discovered": 2, 00:11:02.861 "num_base_bdevs_operational": 4, 00:11:02.861 "base_bdevs_list": [ 00:11:02.861 { 00:11:02.861 "name": "BaseBdev1", 00:11:02.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.861 "is_configured": false, 00:11:02.861 "data_offset": 0, 00:11:02.861 "data_size": 0 00:11:02.861 }, 00:11:02.861 { 00:11:02.861 "name": null, 00:11:02.861 "uuid": "27b8aa49-0ee1-4d31-b821-37b887397662", 00:11:02.861 "is_configured": false, 00:11:02.861 "data_offset": 0, 00:11:02.861 "data_size": 65536 00:11:02.861 }, 00:11:02.861 { 00:11:02.861 "name": "BaseBdev3", 00:11:02.861 "uuid": "69fe4e8b-c821-49b2-8e8e-c8387f860c0e", 00:11:02.861 "is_configured": true, 00:11:02.861 "data_offset": 0, 00:11:02.861 "data_size": 65536 00:11:02.861 }, 00:11:02.861 { 00:11:02.861 "name": "BaseBdev4", 00:11:02.861 "uuid": "45246ba8-6b8c-4437-924a-b33085381c0a", 00:11:02.861 "is_configured": true, 00:11:02.861 "data_offset": 0, 00:11:02.861 "data_size": 65536 00:11:02.861 } 00:11:02.861 ] 00:11:02.861 }' 00:11:02.861 08:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.861 08:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.431 08:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.431 08:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.431 08:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.431 08:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:03.431 08:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.431 08:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:03.431 08:47:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:03.431 08:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.431 08:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.431 [2024-10-05 08:47:39.673993] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:03.431 BaseBdev1 00:11:03.431 08:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.431 08:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:03.431 08:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:03.431 08:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:03.431 08:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:03.431 08:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:03.431 08:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:03.431 08:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:03.431 08:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.431 08:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.431 08:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.431 08:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:03.431 08:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.431 08:47:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:03.431 [ 00:11:03.431 { 00:11:03.431 "name": "BaseBdev1", 00:11:03.431 "aliases": [ 00:11:03.431 "302bc3ec-c7c3-4b9b-b6e4-90e1afc87365" 00:11:03.431 ], 00:11:03.431 "product_name": "Malloc disk", 00:11:03.431 "block_size": 512, 00:11:03.431 "num_blocks": 65536, 00:11:03.431 "uuid": "302bc3ec-c7c3-4b9b-b6e4-90e1afc87365", 00:11:03.431 "assigned_rate_limits": { 00:11:03.431 "rw_ios_per_sec": 0, 00:11:03.431 "rw_mbytes_per_sec": 0, 00:11:03.431 "r_mbytes_per_sec": 0, 00:11:03.431 "w_mbytes_per_sec": 0 00:11:03.431 }, 00:11:03.431 "claimed": true, 00:11:03.431 "claim_type": "exclusive_write", 00:11:03.431 "zoned": false, 00:11:03.431 "supported_io_types": { 00:11:03.431 "read": true, 00:11:03.431 "write": true, 00:11:03.431 "unmap": true, 00:11:03.431 "flush": true, 00:11:03.431 "reset": true, 00:11:03.431 "nvme_admin": false, 00:11:03.431 "nvme_io": false, 00:11:03.431 "nvme_io_md": false, 00:11:03.431 "write_zeroes": true, 00:11:03.431 "zcopy": true, 00:11:03.431 "get_zone_info": false, 00:11:03.431 "zone_management": false, 00:11:03.431 "zone_append": false, 00:11:03.431 "compare": false, 00:11:03.431 "compare_and_write": false, 00:11:03.431 "abort": true, 00:11:03.431 "seek_hole": false, 00:11:03.431 "seek_data": false, 00:11:03.431 "copy": true, 00:11:03.431 "nvme_iov_md": false 00:11:03.431 }, 00:11:03.431 "memory_domains": [ 00:11:03.431 { 00:11:03.431 "dma_device_id": "system", 00:11:03.431 "dma_device_type": 1 00:11:03.431 }, 00:11:03.431 { 00:11:03.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.431 "dma_device_type": 2 00:11:03.431 } 00:11:03.431 ], 00:11:03.431 "driver_specific": {} 00:11:03.431 } 00:11:03.431 ] 00:11:03.431 08:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.431 08:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:03.431 08:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:03.431 08:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.431 08:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.431 08:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:03.431 08:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.431 08:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.431 08:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.431 08:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.431 08:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.431 08:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.431 08:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.431 08:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.431 08:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.431 08:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.431 08:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.431 08:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.431 "name": "Existed_Raid", 00:11:03.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.431 "strip_size_kb": 64, 00:11:03.431 "state": "configuring", 00:11:03.431 "raid_level": "concat", 00:11:03.431 "superblock": false, 
00:11:03.431 "num_base_bdevs": 4, 00:11:03.431 "num_base_bdevs_discovered": 3, 00:11:03.431 "num_base_bdevs_operational": 4, 00:11:03.431 "base_bdevs_list": [ 00:11:03.431 { 00:11:03.431 "name": "BaseBdev1", 00:11:03.431 "uuid": "302bc3ec-c7c3-4b9b-b6e4-90e1afc87365", 00:11:03.431 "is_configured": true, 00:11:03.431 "data_offset": 0, 00:11:03.431 "data_size": 65536 00:11:03.431 }, 00:11:03.431 { 00:11:03.431 "name": null, 00:11:03.431 "uuid": "27b8aa49-0ee1-4d31-b821-37b887397662", 00:11:03.431 "is_configured": false, 00:11:03.431 "data_offset": 0, 00:11:03.431 "data_size": 65536 00:11:03.431 }, 00:11:03.431 { 00:11:03.431 "name": "BaseBdev3", 00:11:03.431 "uuid": "69fe4e8b-c821-49b2-8e8e-c8387f860c0e", 00:11:03.431 "is_configured": true, 00:11:03.431 "data_offset": 0, 00:11:03.431 "data_size": 65536 00:11:03.431 }, 00:11:03.431 { 00:11:03.431 "name": "BaseBdev4", 00:11:03.431 "uuid": "45246ba8-6b8c-4437-924a-b33085381c0a", 00:11:03.431 "is_configured": true, 00:11:03.431 "data_offset": 0, 00:11:03.431 "data_size": 65536 00:11:03.431 } 00:11:03.431 ] 00:11:03.431 }' 00:11:03.431 08:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.431 08:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.001 08:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.001 08:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.001 08:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:04.001 08:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.001 08:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.001 08:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:04.001 08:47:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:04.001 08:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.001 08:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.001 [2024-10-05 08:47:40.229078] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:04.001 08:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.001 08:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:04.001 08:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.001 08:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.001 08:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:04.001 08:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.002 08:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.002 08:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.002 08:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.002 08:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.002 08:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.002 08:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.002 08:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.002 08:47:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:04.002 08:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.002 08:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.002 08:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.002 "name": "Existed_Raid", 00:11:04.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.002 "strip_size_kb": 64, 00:11:04.002 "state": "configuring", 00:11:04.002 "raid_level": "concat", 00:11:04.002 "superblock": false, 00:11:04.002 "num_base_bdevs": 4, 00:11:04.002 "num_base_bdevs_discovered": 2, 00:11:04.002 "num_base_bdevs_operational": 4, 00:11:04.002 "base_bdevs_list": [ 00:11:04.002 { 00:11:04.002 "name": "BaseBdev1", 00:11:04.002 "uuid": "302bc3ec-c7c3-4b9b-b6e4-90e1afc87365", 00:11:04.002 "is_configured": true, 00:11:04.002 "data_offset": 0, 00:11:04.002 "data_size": 65536 00:11:04.002 }, 00:11:04.002 { 00:11:04.002 "name": null, 00:11:04.002 "uuid": "27b8aa49-0ee1-4d31-b821-37b887397662", 00:11:04.002 "is_configured": false, 00:11:04.002 "data_offset": 0, 00:11:04.002 "data_size": 65536 00:11:04.002 }, 00:11:04.002 { 00:11:04.002 "name": null, 00:11:04.002 "uuid": "69fe4e8b-c821-49b2-8e8e-c8387f860c0e", 00:11:04.002 "is_configured": false, 00:11:04.002 "data_offset": 0, 00:11:04.002 "data_size": 65536 00:11:04.002 }, 00:11:04.002 { 00:11:04.002 "name": "BaseBdev4", 00:11:04.002 "uuid": "45246ba8-6b8c-4437-924a-b33085381c0a", 00:11:04.002 "is_configured": true, 00:11:04.002 "data_offset": 0, 00:11:04.002 "data_size": 65536 00:11:04.002 } 00:11:04.002 ] 00:11:04.002 }' 00:11:04.002 08:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.002 08:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.290 08:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:04.290 08:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.290 08:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.290 08:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:04.290 08:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.290 08:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:04.290 08:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:04.290 08:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.290 08:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.290 [2024-10-05 08:47:40.716273] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:04.290 08:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.290 08:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:04.290 08:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.291 08:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.291 08:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:04.291 08:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.291 08:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.291 08:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:11:04.291 08:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.291 08:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.291 08:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.291 08:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.291 08:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.291 08:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.291 08:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.550 08:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.550 08:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.550 "name": "Existed_Raid", 00:11:04.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.550 "strip_size_kb": 64, 00:11:04.550 "state": "configuring", 00:11:04.550 "raid_level": "concat", 00:11:04.550 "superblock": false, 00:11:04.550 "num_base_bdevs": 4, 00:11:04.550 "num_base_bdevs_discovered": 3, 00:11:04.550 "num_base_bdevs_operational": 4, 00:11:04.550 "base_bdevs_list": [ 00:11:04.550 { 00:11:04.550 "name": "BaseBdev1", 00:11:04.550 "uuid": "302bc3ec-c7c3-4b9b-b6e4-90e1afc87365", 00:11:04.550 "is_configured": true, 00:11:04.551 "data_offset": 0, 00:11:04.551 "data_size": 65536 00:11:04.551 }, 00:11:04.551 { 00:11:04.551 "name": null, 00:11:04.551 "uuid": "27b8aa49-0ee1-4d31-b821-37b887397662", 00:11:04.551 "is_configured": false, 00:11:04.551 "data_offset": 0, 00:11:04.551 "data_size": 65536 00:11:04.551 }, 00:11:04.551 { 00:11:04.551 "name": "BaseBdev3", 00:11:04.551 "uuid": "69fe4e8b-c821-49b2-8e8e-c8387f860c0e", 00:11:04.551 
"is_configured": true, 00:11:04.551 "data_offset": 0, 00:11:04.551 "data_size": 65536 00:11:04.551 }, 00:11:04.551 { 00:11:04.551 "name": "BaseBdev4", 00:11:04.551 "uuid": "45246ba8-6b8c-4437-924a-b33085381c0a", 00:11:04.551 "is_configured": true, 00:11:04.551 "data_offset": 0, 00:11:04.551 "data_size": 65536 00:11:04.551 } 00:11:04.551 ] 00:11:04.551 }' 00:11:04.551 08:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.551 08:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.810 08:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.810 08:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:04.810 08:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.810 08:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.810 08:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.810 08:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:04.810 08:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:04.810 08:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.810 08:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.810 [2024-10-05 08:47:41.151551] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:04.811 08:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.811 08:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:04.811 08:47:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.811 08:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.811 08:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:04.811 08:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.811 08:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.811 08:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.811 08:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.811 08:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.811 08:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.811 08:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.811 08:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.811 08:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.811 08:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.811 08:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.070 08:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.070 "name": "Existed_Raid", 00:11:05.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.070 "strip_size_kb": 64, 00:11:05.070 "state": "configuring", 00:11:05.070 "raid_level": "concat", 00:11:05.070 "superblock": false, 00:11:05.070 "num_base_bdevs": 4, 00:11:05.070 "num_base_bdevs_discovered": 2, 00:11:05.070 "num_base_bdevs_operational": 4, 
00:11:05.070 "base_bdevs_list": [ 00:11:05.070 { 00:11:05.070 "name": null, 00:11:05.070 "uuid": "302bc3ec-c7c3-4b9b-b6e4-90e1afc87365", 00:11:05.070 "is_configured": false, 00:11:05.070 "data_offset": 0, 00:11:05.070 "data_size": 65536 00:11:05.070 }, 00:11:05.070 { 00:11:05.070 "name": null, 00:11:05.070 "uuid": "27b8aa49-0ee1-4d31-b821-37b887397662", 00:11:05.070 "is_configured": false, 00:11:05.070 "data_offset": 0, 00:11:05.070 "data_size": 65536 00:11:05.070 }, 00:11:05.070 { 00:11:05.070 "name": "BaseBdev3", 00:11:05.070 "uuid": "69fe4e8b-c821-49b2-8e8e-c8387f860c0e", 00:11:05.070 "is_configured": true, 00:11:05.070 "data_offset": 0, 00:11:05.070 "data_size": 65536 00:11:05.070 }, 00:11:05.070 { 00:11:05.070 "name": "BaseBdev4", 00:11:05.070 "uuid": "45246ba8-6b8c-4437-924a-b33085381c0a", 00:11:05.070 "is_configured": true, 00:11:05.070 "data_offset": 0, 00:11:05.070 "data_size": 65536 00:11:05.070 } 00:11:05.070 ] 00:11:05.070 }' 00:11:05.070 08:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.070 08:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.330 08:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.330 08:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.330 08:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.330 08:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:05.330 08:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.330 08:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:05.330 08:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:05.330 08:47:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.330 08:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.330 [2024-10-05 08:47:41.698902] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:05.330 08:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.330 08:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:05.330 08:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.330 08:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.330 08:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:05.330 08:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.330 08:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.330 08:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.330 08:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.330 08:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.330 08:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.330 08:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.330 08:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.330 08:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.330 08:47:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.330 08:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.330 08:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.330 "name": "Existed_Raid", 00:11:05.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.330 "strip_size_kb": 64, 00:11:05.330 "state": "configuring", 00:11:05.330 "raid_level": "concat", 00:11:05.330 "superblock": false, 00:11:05.330 "num_base_bdevs": 4, 00:11:05.330 "num_base_bdevs_discovered": 3, 00:11:05.330 "num_base_bdevs_operational": 4, 00:11:05.330 "base_bdevs_list": [ 00:11:05.330 { 00:11:05.330 "name": null, 00:11:05.330 "uuid": "302bc3ec-c7c3-4b9b-b6e4-90e1afc87365", 00:11:05.330 "is_configured": false, 00:11:05.330 "data_offset": 0, 00:11:05.330 "data_size": 65536 00:11:05.330 }, 00:11:05.330 { 00:11:05.330 "name": "BaseBdev2", 00:11:05.330 "uuid": "27b8aa49-0ee1-4d31-b821-37b887397662", 00:11:05.330 "is_configured": true, 00:11:05.330 "data_offset": 0, 00:11:05.330 "data_size": 65536 00:11:05.330 }, 00:11:05.330 { 00:11:05.330 "name": "BaseBdev3", 00:11:05.330 "uuid": "69fe4e8b-c821-49b2-8e8e-c8387f860c0e", 00:11:05.330 "is_configured": true, 00:11:05.330 "data_offset": 0, 00:11:05.330 "data_size": 65536 00:11:05.330 }, 00:11:05.330 { 00:11:05.330 "name": "BaseBdev4", 00:11:05.330 "uuid": "45246ba8-6b8c-4437-924a-b33085381c0a", 00:11:05.331 "is_configured": true, 00:11:05.331 "data_offset": 0, 00:11:05.331 "data_size": 65536 00:11:05.331 } 00:11:05.331 ] 00:11:05.331 }' 00:11:05.331 08:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.331 08:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.900 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.900 08:47:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.900 08:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.900 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:05.900 08:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.900 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:05.900 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.900 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:05.900 08:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.900 08:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.900 08:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.900 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 302bc3ec-c7c3-4b9b-b6e4-90e1afc87365 00:11:05.900 08:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.900 08:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.900 [2024-10-05 08:47:42.307848] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:05.900 [2024-10-05 08:47:42.307924] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:05.900 [2024-10-05 08:47:42.307932] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:05.900 [2024-10-05 08:47:42.308237] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:05.900 [2024-10-05 08:47:42.308392] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:05.900 [2024-10-05 08:47:42.308405] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:05.900 [2024-10-05 08:47:42.308665] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:05.900 NewBaseBdev 00:11:05.900 08:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.900 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:05.900 08:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:05.900 08:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:05.900 08:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:05.900 08:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:05.900 08:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:05.900 08:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:05.900 08:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.901 08:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.901 08:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.901 08:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:05.901 08:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.901 08:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.901 [ 00:11:05.901 { 
00:11:05.901 "name": "NewBaseBdev", 00:11:05.901 "aliases": [ 00:11:05.901 "302bc3ec-c7c3-4b9b-b6e4-90e1afc87365" 00:11:05.901 ], 00:11:05.901 "product_name": "Malloc disk", 00:11:05.901 "block_size": 512, 00:11:05.901 "num_blocks": 65536, 00:11:05.901 "uuid": "302bc3ec-c7c3-4b9b-b6e4-90e1afc87365", 00:11:05.901 "assigned_rate_limits": { 00:11:05.901 "rw_ios_per_sec": 0, 00:11:05.901 "rw_mbytes_per_sec": 0, 00:11:05.901 "r_mbytes_per_sec": 0, 00:11:05.901 "w_mbytes_per_sec": 0 00:11:05.901 }, 00:11:05.901 "claimed": true, 00:11:05.901 "claim_type": "exclusive_write", 00:11:05.901 "zoned": false, 00:11:05.901 "supported_io_types": { 00:11:05.901 "read": true, 00:11:05.901 "write": true, 00:11:05.901 "unmap": true, 00:11:05.901 "flush": true, 00:11:05.901 "reset": true, 00:11:05.901 "nvme_admin": false, 00:11:05.901 "nvme_io": false, 00:11:05.901 "nvme_io_md": false, 00:11:05.901 "write_zeroes": true, 00:11:05.901 "zcopy": true, 00:11:05.901 "get_zone_info": false, 00:11:05.901 "zone_management": false, 00:11:05.901 "zone_append": false, 00:11:05.901 "compare": false, 00:11:05.901 "compare_and_write": false, 00:11:05.901 "abort": true, 00:11:05.901 "seek_hole": false, 00:11:05.901 "seek_data": false, 00:11:05.901 "copy": true, 00:11:05.901 "nvme_iov_md": false 00:11:05.901 }, 00:11:05.901 "memory_domains": [ 00:11:05.901 { 00:11:05.901 "dma_device_id": "system", 00:11:05.901 "dma_device_type": 1 00:11:05.901 }, 00:11:05.901 { 00:11:05.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.901 "dma_device_type": 2 00:11:05.901 } 00:11:05.901 ], 00:11:05.901 "driver_specific": {} 00:11:05.901 } 00:11:05.901 ] 00:11:05.901 08:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.901 08:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:05.901 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:05.901 
08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.901 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.901 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:05.901 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.901 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.901 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.901 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.901 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.901 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.901 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.901 08:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.901 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.901 08:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.901 08:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.161 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.161 "name": "Existed_Raid", 00:11:06.161 "uuid": "3653e714-675f-45a0-b44c-6cd41df342ae", 00:11:06.161 "strip_size_kb": 64, 00:11:06.161 "state": "online", 00:11:06.161 "raid_level": "concat", 00:11:06.161 "superblock": false, 00:11:06.161 "num_base_bdevs": 4, 00:11:06.161 "num_base_bdevs_discovered": 4, 00:11:06.161 
"num_base_bdevs_operational": 4, 00:11:06.161 "base_bdevs_list": [ 00:11:06.161 { 00:11:06.161 "name": "NewBaseBdev", 00:11:06.161 "uuid": "302bc3ec-c7c3-4b9b-b6e4-90e1afc87365", 00:11:06.161 "is_configured": true, 00:11:06.161 "data_offset": 0, 00:11:06.161 "data_size": 65536 00:11:06.161 }, 00:11:06.161 { 00:11:06.161 "name": "BaseBdev2", 00:11:06.161 "uuid": "27b8aa49-0ee1-4d31-b821-37b887397662", 00:11:06.161 "is_configured": true, 00:11:06.161 "data_offset": 0, 00:11:06.161 "data_size": 65536 00:11:06.161 }, 00:11:06.161 { 00:11:06.161 "name": "BaseBdev3", 00:11:06.161 "uuid": "69fe4e8b-c821-49b2-8e8e-c8387f860c0e", 00:11:06.161 "is_configured": true, 00:11:06.161 "data_offset": 0, 00:11:06.161 "data_size": 65536 00:11:06.161 }, 00:11:06.161 { 00:11:06.161 "name": "BaseBdev4", 00:11:06.161 "uuid": "45246ba8-6b8c-4437-924a-b33085381c0a", 00:11:06.161 "is_configured": true, 00:11:06.161 "data_offset": 0, 00:11:06.161 "data_size": 65536 00:11:06.161 } 00:11:06.161 ] 00:11:06.161 }' 00:11:06.161 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.161 08:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.421 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:06.421 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:06.421 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:06.421 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:06.421 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:06.421 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:06.421 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:11:06.421 08:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.421 08:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.421 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:06.421 [2024-10-05 08:47:42.771438] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:06.421 08:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.421 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:06.421 "name": "Existed_Raid", 00:11:06.421 "aliases": [ 00:11:06.421 "3653e714-675f-45a0-b44c-6cd41df342ae" 00:11:06.421 ], 00:11:06.421 "product_name": "Raid Volume", 00:11:06.421 "block_size": 512, 00:11:06.421 "num_blocks": 262144, 00:11:06.421 "uuid": "3653e714-675f-45a0-b44c-6cd41df342ae", 00:11:06.421 "assigned_rate_limits": { 00:11:06.421 "rw_ios_per_sec": 0, 00:11:06.421 "rw_mbytes_per_sec": 0, 00:11:06.421 "r_mbytes_per_sec": 0, 00:11:06.421 "w_mbytes_per_sec": 0 00:11:06.421 }, 00:11:06.421 "claimed": false, 00:11:06.421 "zoned": false, 00:11:06.421 "supported_io_types": { 00:11:06.421 "read": true, 00:11:06.421 "write": true, 00:11:06.421 "unmap": true, 00:11:06.421 "flush": true, 00:11:06.421 "reset": true, 00:11:06.421 "nvme_admin": false, 00:11:06.421 "nvme_io": false, 00:11:06.421 "nvme_io_md": false, 00:11:06.421 "write_zeroes": true, 00:11:06.421 "zcopy": false, 00:11:06.421 "get_zone_info": false, 00:11:06.421 "zone_management": false, 00:11:06.421 "zone_append": false, 00:11:06.421 "compare": false, 00:11:06.421 "compare_and_write": false, 00:11:06.421 "abort": false, 00:11:06.421 "seek_hole": false, 00:11:06.421 "seek_data": false, 00:11:06.421 "copy": false, 00:11:06.421 "nvme_iov_md": false 00:11:06.421 }, 00:11:06.421 "memory_domains": [ 00:11:06.421 { 00:11:06.421 "dma_device_id": "system", 
00:11:06.421 "dma_device_type": 1 00:11:06.421 }, 00:11:06.421 { 00:11:06.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.421 "dma_device_type": 2 00:11:06.421 }, 00:11:06.421 { 00:11:06.421 "dma_device_id": "system", 00:11:06.421 "dma_device_type": 1 00:11:06.421 }, 00:11:06.421 { 00:11:06.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.421 "dma_device_type": 2 00:11:06.421 }, 00:11:06.421 { 00:11:06.421 "dma_device_id": "system", 00:11:06.421 "dma_device_type": 1 00:11:06.421 }, 00:11:06.421 { 00:11:06.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.421 "dma_device_type": 2 00:11:06.421 }, 00:11:06.421 { 00:11:06.421 "dma_device_id": "system", 00:11:06.421 "dma_device_type": 1 00:11:06.421 }, 00:11:06.421 { 00:11:06.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.421 "dma_device_type": 2 00:11:06.421 } 00:11:06.421 ], 00:11:06.421 "driver_specific": { 00:11:06.421 "raid": { 00:11:06.421 "uuid": "3653e714-675f-45a0-b44c-6cd41df342ae", 00:11:06.421 "strip_size_kb": 64, 00:11:06.421 "state": "online", 00:11:06.421 "raid_level": "concat", 00:11:06.421 "superblock": false, 00:11:06.421 "num_base_bdevs": 4, 00:11:06.421 "num_base_bdevs_discovered": 4, 00:11:06.421 "num_base_bdevs_operational": 4, 00:11:06.421 "base_bdevs_list": [ 00:11:06.421 { 00:11:06.421 "name": "NewBaseBdev", 00:11:06.421 "uuid": "302bc3ec-c7c3-4b9b-b6e4-90e1afc87365", 00:11:06.421 "is_configured": true, 00:11:06.421 "data_offset": 0, 00:11:06.421 "data_size": 65536 00:11:06.421 }, 00:11:06.421 { 00:11:06.421 "name": "BaseBdev2", 00:11:06.421 "uuid": "27b8aa49-0ee1-4d31-b821-37b887397662", 00:11:06.421 "is_configured": true, 00:11:06.421 "data_offset": 0, 00:11:06.421 "data_size": 65536 00:11:06.421 }, 00:11:06.421 { 00:11:06.421 "name": "BaseBdev3", 00:11:06.421 "uuid": "69fe4e8b-c821-49b2-8e8e-c8387f860c0e", 00:11:06.421 "is_configured": true, 00:11:06.421 "data_offset": 0, 00:11:06.421 "data_size": 65536 00:11:06.421 }, 00:11:06.421 { 00:11:06.421 "name": "BaseBdev4", 
00:11:06.421 "uuid": "45246ba8-6b8c-4437-924a-b33085381c0a", 00:11:06.421 "is_configured": true, 00:11:06.421 "data_offset": 0, 00:11:06.421 "data_size": 65536 00:11:06.421 } 00:11:06.421 ] 00:11:06.421 } 00:11:06.421 } 00:11:06.421 }' 00:11:06.421 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:06.421 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:06.421 BaseBdev2 00:11:06.421 BaseBdev3 00:11:06.421 BaseBdev4' 00:11:06.421 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.681 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:06.681 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.681 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:06.681 08:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.681 08:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.681 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.681 08:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.681 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.681 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.681 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.681 08:47:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:06.681 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.681 08:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.681 08:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.681 08:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.681 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.681 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.681 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.681 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.681 08:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:06.681 08:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.681 08:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.681 08:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.681 08:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.682 08:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.682 08:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.682 08:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:06.682 08:47:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.682 08:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.682 08:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.682 08:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.682 08:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.682 08:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.682 08:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:06.682 08:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.682 08:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.682 [2024-10-05 08:47:43.086534] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:06.682 [2024-10-05 08:47:43.086572] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:06.682 [2024-10-05 08:47:43.086659] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:06.682 [2024-10-05 08:47:43.086747] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:06.682 [2024-10-05 08:47:43.086762] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:06.682 08:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.682 08:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69898 00:11:06.682 08:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 
-- # '[' -z 69898 ']' 00:11:06.682 08:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 69898 00:11:06.682 08:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:11:06.682 08:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:06.682 08:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69898 00:11:06.682 08:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:06.682 08:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:06.682 killing process with pid 69898 00:11:06.682 08:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69898' 00:11:06.682 08:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 69898 00:11:06.682 [2024-10-05 08:47:43.133192] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:06.682 08:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 69898 00:11:07.251 [2024-10-05 08:47:43.545756] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:08.629 08:47:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:08.630 00:11:08.630 real 0m11.566s 00:11:08.630 user 0m17.894s 00:11:08.630 sys 0m2.228s 00:11:08.630 08:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:08.630 08:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.630 ************************************ 00:11:08.630 END TEST raid_state_function_test 00:11:08.630 ************************************ 00:11:08.630 08:47:44 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 
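The test that just finished repeatedly calls `verify_raid_bdev_state`, which fetches `bdev_raid_get_bdevs all` over RPC and selects the `Existed_Raid` entry with `jq -r '.[] | select(.name == "Existed_Raid")'` before comparing its fields to the expected values. A minimal Python sketch of that check follows; the helper name, sample reply, and field values are illustrative stand-ins shaped like the JSON dumps in the log above, not SPDK code:

```python
# Sketch (assumption, not SPDK source): parse a bdev_raid_get_bdevs-style
# reply and verify the fields the shell helper extracts with jq.
import json

# Sample reply mirroring the "configuring" dump in the transcript.
sample_reply = '''[{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "concat",
  "strip_size_kb": 64,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 3
}]'''

def verify_raid_bdev_state(reply, name, expected_state, raid_level,
                           strip_size, num_operational):
    # Equivalent of: jq -r '.[] | select(.name == "...")'
    info = next(b for b in json.loads(reply) if b["name"] == name)
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs"] == num_operational
    return info

info = verify_raid_bdev_state(sample_reply, "Existed_Raid",
                              "configuring", "concat", 64, 4)
print(info["num_base_bdevs_discovered"])  # 3
```

In the real test the reply comes from `rpc_cmd bdev_raid_get_bdevs all`; here a literal string stands in so the logic is self-contained.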
00:11:08.630 08:47:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:08.630 08:47:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:08.630 08:47:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:08.630 ************************************ 00:11:08.630 START TEST raid_state_function_test_sb 00:11:08.630 ************************************ 00:11:08.630 08:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:11:08.630 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:08.630 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:08.630 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:08.630 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:08.630 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:08.630 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:08.630 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:08.630 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:08.630 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:08.630 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:08.630 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:08.630 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:08.630 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:08.630 08:47:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:08.630 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:08.630 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:08.630 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:08.630 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:08.630 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:08.630 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:08.630 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:08.630 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:08.630 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:08.630 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:08.630 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:08.630 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:08.630 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:08.630 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:08.630 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:08.630 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70504 00:11:08.630 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:08.630 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70504' 00:11:08.630 Process raid pid: 70504 00:11:08.630 08:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70504 00:11:08.630 08:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 70504 ']' 00:11:08.630 08:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.630 08:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:08.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.630 08:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.630 08:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:08.630 08:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.630 [2024-10-05 08:47:45.055003] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:11:08.630 [2024-10-05 08:47:45.055116] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:08.890 [2024-10-05 08:47:45.205637] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.150 [2024-10-05 08:47:45.452602] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.410 [2024-10-05 08:47:45.687040] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:09.410 [2024-10-05 08:47:45.687078] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:09.410 08:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:09.410 08:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:09.410 08:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:09.410 08:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.410 08:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.410 [2024-10-05 08:47:45.866413] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:09.410 [2024-10-05 08:47:45.866476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:09.410 [2024-10-05 08:47:45.866486] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:09.410 [2024-10-05 08:47:45.866498] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:09.410 [2024-10-05 08:47:45.866504] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:11:09.410 [2024-10-05 08:47:45.866513] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:09.410 [2024-10-05 08:47:45.866519] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:09.410 [2024-10-05 08:47:45.866529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:09.410 08:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.410 08:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:09.410 08:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.410 08:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.410 08:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:09.410 08:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.410 08:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.410 08:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.410 08:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.410 08:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.410 08:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.410 08:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.410 08:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.410 
08:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.410 08:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.670 08:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.670 08:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.670 "name": "Existed_Raid", 00:11:09.670 "uuid": "c0ac4e7d-39da-4ca1-a902-ce011431d3eb", 00:11:09.670 "strip_size_kb": 64, 00:11:09.670 "state": "configuring", 00:11:09.670 "raid_level": "concat", 00:11:09.670 "superblock": true, 00:11:09.670 "num_base_bdevs": 4, 00:11:09.670 "num_base_bdevs_discovered": 0, 00:11:09.670 "num_base_bdevs_operational": 4, 00:11:09.670 "base_bdevs_list": [ 00:11:09.670 { 00:11:09.670 "name": "BaseBdev1", 00:11:09.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.670 "is_configured": false, 00:11:09.670 "data_offset": 0, 00:11:09.670 "data_size": 0 00:11:09.670 }, 00:11:09.670 { 00:11:09.670 "name": "BaseBdev2", 00:11:09.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.670 "is_configured": false, 00:11:09.670 "data_offset": 0, 00:11:09.670 "data_size": 0 00:11:09.670 }, 00:11:09.670 { 00:11:09.670 "name": "BaseBdev3", 00:11:09.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.670 "is_configured": false, 00:11:09.670 "data_offset": 0, 00:11:09.670 "data_size": 0 00:11:09.670 }, 00:11:09.670 { 00:11:09.670 "name": "BaseBdev4", 00:11:09.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.670 "is_configured": false, 00:11:09.670 "data_offset": 0, 00:11:09.670 "data_size": 0 00:11:09.670 } 00:11:09.670 ] 00:11:09.670 }' 00:11:09.670 08:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.670 08:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.930 08:47:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:09.930 08:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.930 08:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.930 [2024-10-05 08:47:46.297596] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:09.930 [2024-10-05 08:47:46.297643] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:09.930 08:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.930 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:09.930 08:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.930 08:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.930 [2024-10-05 08:47:46.309612] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:09.930 [2024-10-05 08:47:46.309671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:09.930 [2024-10-05 08:47:46.309679] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:09.930 [2024-10-05 08:47:46.309689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:09.930 [2024-10-05 08:47:46.309695] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:09.930 [2024-10-05 08:47:46.309704] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:09.930 [2024-10-05 08:47:46.309710] bdev.c:8281:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:11:09.930 [2024-10-05 08:47:46.309718] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:09.930 08:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.930 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:09.930 08:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.930 08:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.930 [2024-10-05 08:47:46.369537] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:09.930 BaseBdev1 00:11:09.930 08:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.930 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:09.930 08:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:09.930 08:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:09.930 08:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:09.930 08:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:09.930 08:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:09.930 08:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:09.930 08:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.931 08:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.931 08:47:46 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.931 08:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:09.931 08:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.931 08:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.931 [ 00:11:09.931 { 00:11:09.931 "name": "BaseBdev1", 00:11:09.931 "aliases": [ 00:11:09.931 "7c10c1d1-6917-46e9-b586-388c73949bee" 00:11:09.931 ], 00:11:09.931 "product_name": "Malloc disk", 00:11:09.931 "block_size": 512, 00:11:09.931 "num_blocks": 65536, 00:11:09.931 "uuid": "7c10c1d1-6917-46e9-b586-388c73949bee", 00:11:09.931 "assigned_rate_limits": { 00:11:09.931 "rw_ios_per_sec": 0, 00:11:09.931 "rw_mbytes_per_sec": 0, 00:11:09.931 "r_mbytes_per_sec": 0, 00:11:09.931 "w_mbytes_per_sec": 0 00:11:09.931 }, 00:11:09.931 "claimed": true, 00:11:09.931 "claim_type": "exclusive_write", 00:11:09.931 "zoned": false, 00:11:09.931 "supported_io_types": { 00:11:09.931 "read": true, 00:11:09.931 "write": true, 00:11:09.931 "unmap": true, 00:11:09.931 "flush": true, 00:11:09.931 "reset": true, 00:11:09.931 "nvme_admin": false, 00:11:09.931 "nvme_io": false, 00:11:09.931 "nvme_io_md": false, 00:11:09.931 "write_zeroes": true, 00:11:09.931 "zcopy": true, 00:11:09.931 "get_zone_info": false, 00:11:10.191 "zone_management": false, 00:11:10.191 "zone_append": false, 00:11:10.191 "compare": false, 00:11:10.191 "compare_and_write": false, 00:11:10.191 "abort": true, 00:11:10.191 "seek_hole": false, 00:11:10.191 "seek_data": false, 00:11:10.191 "copy": true, 00:11:10.191 "nvme_iov_md": false 00:11:10.191 }, 00:11:10.191 "memory_domains": [ 00:11:10.191 { 00:11:10.191 "dma_device_id": "system", 00:11:10.191 "dma_device_type": 1 00:11:10.191 }, 00:11:10.191 { 00:11:10.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.191 "dma_device_type": 2 00:11:10.191 } 
00:11:10.191 ], 00:11:10.191 "driver_specific": {} 00:11:10.191 } 00:11:10.191 ] 00:11:10.191 08:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.191 08:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:10.191 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:10.191 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.191 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.191 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.191 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.191 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.191 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.191 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.191 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.191 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.191 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.191 08:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.191 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.191 08:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.191 08:47:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.191 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.191 "name": "Existed_Raid", 00:11:10.191 "uuid": "2be710a5-00f1-4c49-8bf3-c4b19c17ff7d", 00:11:10.191 "strip_size_kb": 64, 00:11:10.191 "state": "configuring", 00:11:10.191 "raid_level": "concat", 00:11:10.191 "superblock": true, 00:11:10.191 "num_base_bdevs": 4, 00:11:10.191 "num_base_bdevs_discovered": 1, 00:11:10.191 "num_base_bdevs_operational": 4, 00:11:10.191 "base_bdevs_list": [ 00:11:10.191 { 00:11:10.191 "name": "BaseBdev1", 00:11:10.191 "uuid": "7c10c1d1-6917-46e9-b586-388c73949bee", 00:11:10.191 "is_configured": true, 00:11:10.191 "data_offset": 2048, 00:11:10.191 "data_size": 63488 00:11:10.191 }, 00:11:10.191 { 00:11:10.191 "name": "BaseBdev2", 00:11:10.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.191 "is_configured": false, 00:11:10.191 "data_offset": 0, 00:11:10.191 "data_size": 0 00:11:10.191 }, 00:11:10.191 { 00:11:10.191 "name": "BaseBdev3", 00:11:10.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.191 "is_configured": false, 00:11:10.191 "data_offset": 0, 00:11:10.191 "data_size": 0 00:11:10.191 }, 00:11:10.191 { 00:11:10.191 "name": "BaseBdev4", 00:11:10.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.191 "is_configured": false, 00:11:10.191 "data_offset": 0, 00:11:10.191 "data_size": 0 00:11:10.191 } 00:11:10.191 ] 00:11:10.191 }' 00:11:10.191 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.191 08:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.451 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:10.451 08:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.451 08:47:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.451 [2024-10-05 08:47:46.832754] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:10.451 [2024-10-05 08:47:46.832811] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:10.451 08:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.452 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:10.452 08:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.452 08:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.452 [2024-10-05 08:47:46.844800] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:10.452 [2024-10-05 08:47:46.846820] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:10.452 [2024-10-05 08:47:46.846866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:10.452 [2024-10-05 08:47:46.846876] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:10.452 [2024-10-05 08:47:46.846887] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:10.452 [2024-10-05 08:47:46.846893] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:10.452 [2024-10-05 08:47:46.846901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:10.452 08:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.452 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:11:10.452 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:10.452 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:10.452 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.452 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.452 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.452 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.452 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.452 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.452 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.452 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.452 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.452 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.452 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.452 08:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.452 08:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.452 08:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.452 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:10.452 "name": "Existed_Raid", 00:11:10.452 "uuid": "6fc00eba-20f6-4b28-8bf7-3f110fe22e4e", 00:11:10.452 "strip_size_kb": 64, 00:11:10.452 "state": "configuring", 00:11:10.452 "raid_level": "concat", 00:11:10.452 "superblock": true, 00:11:10.452 "num_base_bdevs": 4, 00:11:10.452 "num_base_bdevs_discovered": 1, 00:11:10.452 "num_base_bdevs_operational": 4, 00:11:10.452 "base_bdevs_list": [ 00:11:10.452 { 00:11:10.452 "name": "BaseBdev1", 00:11:10.452 "uuid": "7c10c1d1-6917-46e9-b586-388c73949bee", 00:11:10.452 "is_configured": true, 00:11:10.452 "data_offset": 2048, 00:11:10.452 "data_size": 63488 00:11:10.452 }, 00:11:10.452 { 00:11:10.452 "name": "BaseBdev2", 00:11:10.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.452 "is_configured": false, 00:11:10.452 "data_offset": 0, 00:11:10.452 "data_size": 0 00:11:10.452 }, 00:11:10.452 { 00:11:10.452 "name": "BaseBdev3", 00:11:10.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.452 "is_configured": false, 00:11:10.452 "data_offset": 0, 00:11:10.452 "data_size": 0 00:11:10.452 }, 00:11:10.452 { 00:11:10.452 "name": "BaseBdev4", 00:11:10.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.452 "is_configured": false, 00:11:10.452 "data_offset": 0, 00:11:10.452 "data_size": 0 00:11:10.452 } 00:11:10.452 ] 00:11:10.452 }' 00:11:10.452 08:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.452 08:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.022 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:11.022 08:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.022 08:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.022 [2024-10-05 08:47:47.340106] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:11:11.022 BaseBdev2 00:11:11.022 08:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.022 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:11.022 08:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:11.022 08:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:11.022 08:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:11.022 08:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:11.022 08:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:11.022 08:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:11.022 08:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.022 08:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.022 08:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.022 08:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:11.022 08:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.022 08:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.022 [ 00:11:11.022 { 00:11:11.022 "name": "BaseBdev2", 00:11:11.022 "aliases": [ 00:11:11.022 "0e0c1153-23b7-4e8a-b7cd-85d0489a99e6" 00:11:11.022 ], 00:11:11.022 "product_name": "Malloc disk", 00:11:11.022 "block_size": 512, 00:11:11.022 "num_blocks": 65536, 00:11:11.022 "uuid": "0e0c1153-23b7-4e8a-b7cd-85d0489a99e6", 
00:11:11.022 "assigned_rate_limits": { 00:11:11.022 "rw_ios_per_sec": 0, 00:11:11.022 "rw_mbytes_per_sec": 0, 00:11:11.022 "r_mbytes_per_sec": 0, 00:11:11.022 "w_mbytes_per_sec": 0 00:11:11.022 }, 00:11:11.022 "claimed": true, 00:11:11.022 "claim_type": "exclusive_write", 00:11:11.022 "zoned": false, 00:11:11.022 "supported_io_types": { 00:11:11.022 "read": true, 00:11:11.022 "write": true, 00:11:11.022 "unmap": true, 00:11:11.022 "flush": true, 00:11:11.022 "reset": true, 00:11:11.022 "nvme_admin": false, 00:11:11.022 "nvme_io": false, 00:11:11.022 "nvme_io_md": false, 00:11:11.022 "write_zeroes": true, 00:11:11.022 "zcopy": true, 00:11:11.022 "get_zone_info": false, 00:11:11.022 "zone_management": false, 00:11:11.022 "zone_append": false, 00:11:11.022 "compare": false, 00:11:11.022 "compare_and_write": false, 00:11:11.022 "abort": true, 00:11:11.022 "seek_hole": false, 00:11:11.022 "seek_data": false, 00:11:11.022 "copy": true, 00:11:11.022 "nvme_iov_md": false 00:11:11.022 }, 00:11:11.022 "memory_domains": [ 00:11:11.022 { 00:11:11.022 "dma_device_id": "system", 00:11:11.022 "dma_device_type": 1 00:11:11.022 }, 00:11:11.022 { 00:11:11.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.022 "dma_device_type": 2 00:11:11.022 } 00:11:11.022 ], 00:11:11.022 "driver_specific": {} 00:11:11.022 } 00:11:11.022 ] 00:11:11.022 08:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.022 08:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:11.022 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:11.022 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:11.022 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:11.022 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:11.022 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.022 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:11.022 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.022 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.022 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.022 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.022 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.022 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.022 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.022 08:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.022 08:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.022 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.022 08:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.022 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.023 "name": "Existed_Raid", 00:11:11.023 "uuid": "6fc00eba-20f6-4b28-8bf7-3f110fe22e4e", 00:11:11.023 "strip_size_kb": 64, 00:11:11.023 "state": "configuring", 00:11:11.023 "raid_level": "concat", 00:11:11.023 "superblock": true, 00:11:11.023 "num_base_bdevs": 4, 00:11:11.023 "num_base_bdevs_discovered": 2, 00:11:11.023 
"num_base_bdevs_operational": 4, 00:11:11.023 "base_bdevs_list": [ 00:11:11.023 { 00:11:11.023 "name": "BaseBdev1", 00:11:11.023 "uuid": "7c10c1d1-6917-46e9-b586-388c73949bee", 00:11:11.023 "is_configured": true, 00:11:11.023 "data_offset": 2048, 00:11:11.023 "data_size": 63488 00:11:11.023 }, 00:11:11.023 { 00:11:11.023 "name": "BaseBdev2", 00:11:11.023 "uuid": "0e0c1153-23b7-4e8a-b7cd-85d0489a99e6", 00:11:11.023 "is_configured": true, 00:11:11.023 "data_offset": 2048, 00:11:11.023 "data_size": 63488 00:11:11.023 }, 00:11:11.023 { 00:11:11.023 "name": "BaseBdev3", 00:11:11.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.023 "is_configured": false, 00:11:11.023 "data_offset": 0, 00:11:11.023 "data_size": 0 00:11:11.023 }, 00:11:11.023 { 00:11:11.023 "name": "BaseBdev4", 00:11:11.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.023 "is_configured": false, 00:11:11.023 "data_offset": 0, 00:11:11.023 "data_size": 0 00:11:11.023 } 00:11:11.023 ] 00:11:11.023 }' 00:11:11.023 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.023 08:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.593 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:11.593 08:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.593 08:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.593 [2024-10-05 08:47:47.822068] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:11.593 BaseBdev3 00:11:11.593 08:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.593 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:11.593 08:47:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:11.593 08:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:11.593 08:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:11.593 08:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:11.593 08:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:11.593 08:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:11.593 08:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.593 08:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.593 08:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.593 08:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:11.593 08:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.593 08:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.593 [ 00:11:11.593 { 00:11:11.593 "name": "BaseBdev3", 00:11:11.593 "aliases": [ 00:11:11.593 "60c59df2-e1b7-44eb-bf2d-f0665f949b4b" 00:11:11.593 ], 00:11:11.593 "product_name": "Malloc disk", 00:11:11.593 "block_size": 512, 00:11:11.593 "num_blocks": 65536, 00:11:11.593 "uuid": "60c59df2-e1b7-44eb-bf2d-f0665f949b4b", 00:11:11.593 "assigned_rate_limits": { 00:11:11.593 "rw_ios_per_sec": 0, 00:11:11.593 "rw_mbytes_per_sec": 0, 00:11:11.593 "r_mbytes_per_sec": 0, 00:11:11.593 "w_mbytes_per_sec": 0 00:11:11.593 }, 00:11:11.593 "claimed": true, 00:11:11.593 "claim_type": "exclusive_write", 00:11:11.593 "zoned": false, 00:11:11.593 "supported_io_types": { 
00:11:11.593 "read": true, 00:11:11.593 "write": true, 00:11:11.593 "unmap": true, 00:11:11.593 "flush": true, 00:11:11.593 "reset": true, 00:11:11.593 "nvme_admin": false, 00:11:11.593 "nvme_io": false, 00:11:11.593 "nvme_io_md": false, 00:11:11.593 "write_zeroes": true, 00:11:11.593 "zcopy": true, 00:11:11.593 "get_zone_info": false, 00:11:11.593 "zone_management": false, 00:11:11.593 "zone_append": false, 00:11:11.593 "compare": false, 00:11:11.593 "compare_and_write": false, 00:11:11.593 "abort": true, 00:11:11.593 "seek_hole": false, 00:11:11.593 "seek_data": false, 00:11:11.593 "copy": true, 00:11:11.593 "nvme_iov_md": false 00:11:11.593 }, 00:11:11.593 "memory_domains": [ 00:11:11.593 { 00:11:11.593 "dma_device_id": "system", 00:11:11.593 "dma_device_type": 1 00:11:11.593 }, 00:11:11.593 { 00:11:11.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.593 "dma_device_type": 2 00:11:11.593 } 00:11:11.593 ], 00:11:11.593 "driver_specific": {} 00:11:11.593 } 00:11:11.593 ] 00:11:11.593 08:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.593 08:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:11.593 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:11.593 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:11.593 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:11.593 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.593 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.593 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:11.593 08:47:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.593 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.593 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.593 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.593 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.593 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.593 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.593 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.593 08:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.593 08:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.593 08:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.593 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.593 "name": "Existed_Raid", 00:11:11.593 "uuid": "6fc00eba-20f6-4b28-8bf7-3f110fe22e4e", 00:11:11.594 "strip_size_kb": 64, 00:11:11.594 "state": "configuring", 00:11:11.594 "raid_level": "concat", 00:11:11.594 "superblock": true, 00:11:11.594 "num_base_bdevs": 4, 00:11:11.594 "num_base_bdevs_discovered": 3, 00:11:11.594 "num_base_bdevs_operational": 4, 00:11:11.594 "base_bdevs_list": [ 00:11:11.594 { 00:11:11.594 "name": "BaseBdev1", 00:11:11.594 "uuid": "7c10c1d1-6917-46e9-b586-388c73949bee", 00:11:11.594 "is_configured": true, 00:11:11.594 "data_offset": 2048, 00:11:11.594 "data_size": 63488 00:11:11.594 }, 00:11:11.594 { 00:11:11.594 "name": "BaseBdev2", 00:11:11.594 
"uuid": "0e0c1153-23b7-4e8a-b7cd-85d0489a99e6", 00:11:11.594 "is_configured": true, 00:11:11.594 "data_offset": 2048, 00:11:11.594 "data_size": 63488 00:11:11.594 }, 00:11:11.594 { 00:11:11.594 "name": "BaseBdev3", 00:11:11.594 "uuid": "60c59df2-e1b7-44eb-bf2d-f0665f949b4b", 00:11:11.594 "is_configured": true, 00:11:11.594 "data_offset": 2048, 00:11:11.594 "data_size": 63488 00:11:11.594 }, 00:11:11.594 { 00:11:11.594 "name": "BaseBdev4", 00:11:11.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.594 "is_configured": false, 00:11:11.594 "data_offset": 0, 00:11:11.594 "data_size": 0 00:11:11.594 } 00:11:11.594 ] 00:11:11.594 }' 00:11:11.594 08:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.594 08:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.854 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:11.854 08:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.854 08:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.114 [2024-10-05 08:47:48.361293] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:12.114 [2024-10-05 08:47:48.361601] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:12.114 [2024-10-05 08:47:48.361626] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:12.114 [2024-10-05 08:47:48.361940] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:12.114 [2024-10-05 08:47:48.362127] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:12.114 [2024-10-05 08:47:48.362152] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:11:12.114 BaseBdev4 00:11:12.114 [2024-10-05 08:47:48.362301] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.114 08:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.114 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:12.114 08:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:12.114 08:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:12.114 08:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:12.114 08:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:12.114 08:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:12.114 08:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:12.114 08:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.114 08:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.114 08:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.114 08:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:12.114 08:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.114 08:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.114 [ 00:11:12.114 { 00:11:12.114 "name": "BaseBdev4", 00:11:12.114 "aliases": [ 00:11:12.114 "3d569007-448b-446f-a7d6-e9ed90e3b494" 00:11:12.114 ], 00:11:12.114 "product_name": "Malloc disk", 00:11:12.114 "block_size": 512, 
00:11:12.114 "num_blocks": 65536, 00:11:12.114 "uuid": "3d569007-448b-446f-a7d6-e9ed90e3b494", 00:11:12.114 "assigned_rate_limits": { 00:11:12.114 "rw_ios_per_sec": 0, 00:11:12.114 "rw_mbytes_per_sec": 0, 00:11:12.114 "r_mbytes_per_sec": 0, 00:11:12.114 "w_mbytes_per_sec": 0 00:11:12.114 }, 00:11:12.114 "claimed": true, 00:11:12.114 "claim_type": "exclusive_write", 00:11:12.114 "zoned": false, 00:11:12.114 "supported_io_types": { 00:11:12.114 "read": true, 00:11:12.114 "write": true, 00:11:12.114 "unmap": true, 00:11:12.114 "flush": true, 00:11:12.114 "reset": true, 00:11:12.114 "nvme_admin": false, 00:11:12.114 "nvme_io": false, 00:11:12.114 "nvme_io_md": false, 00:11:12.114 "write_zeroes": true, 00:11:12.114 "zcopy": true, 00:11:12.114 "get_zone_info": false, 00:11:12.114 "zone_management": false, 00:11:12.114 "zone_append": false, 00:11:12.114 "compare": false, 00:11:12.114 "compare_and_write": false, 00:11:12.114 "abort": true, 00:11:12.114 "seek_hole": false, 00:11:12.114 "seek_data": false, 00:11:12.114 "copy": true, 00:11:12.114 "nvme_iov_md": false 00:11:12.114 }, 00:11:12.114 "memory_domains": [ 00:11:12.114 { 00:11:12.114 "dma_device_id": "system", 00:11:12.114 "dma_device_type": 1 00:11:12.114 }, 00:11:12.114 { 00:11:12.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.114 "dma_device_type": 2 00:11:12.114 } 00:11:12.114 ], 00:11:12.114 "driver_specific": {} 00:11:12.114 } 00:11:12.114 ] 00:11:12.114 08:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.114 08:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:12.114 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:12.114 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:12.114 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 
64 4 00:11:12.114 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.114 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:12.114 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:12.114 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.114 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.114 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.114 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.114 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.114 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.114 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.114 08:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.114 08:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.114 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.114 08:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.114 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.114 "name": "Existed_Raid", 00:11:12.114 "uuid": "6fc00eba-20f6-4b28-8bf7-3f110fe22e4e", 00:11:12.114 "strip_size_kb": 64, 00:11:12.114 "state": "online", 00:11:12.114 "raid_level": "concat", 00:11:12.114 "superblock": true, 00:11:12.114 "num_base_bdevs": 
4, 00:11:12.114 "num_base_bdevs_discovered": 4, 00:11:12.114 "num_base_bdevs_operational": 4, 00:11:12.114 "base_bdevs_list": [ 00:11:12.114 { 00:11:12.114 "name": "BaseBdev1", 00:11:12.114 "uuid": "7c10c1d1-6917-46e9-b586-388c73949bee", 00:11:12.114 "is_configured": true, 00:11:12.114 "data_offset": 2048, 00:11:12.114 "data_size": 63488 00:11:12.114 }, 00:11:12.114 { 00:11:12.114 "name": "BaseBdev2", 00:11:12.114 "uuid": "0e0c1153-23b7-4e8a-b7cd-85d0489a99e6", 00:11:12.114 "is_configured": true, 00:11:12.114 "data_offset": 2048, 00:11:12.114 "data_size": 63488 00:11:12.114 }, 00:11:12.114 { 00:11:12.114 "name": "BaseBdev3", 00:11:12.114 "uuid": "60c59df2-e1b7-44eb-bf2d-f0665f949b4b", 00:11:12.114 "is_configured": true, 00:11:12.114 "data_offset": 2048, 00:11:12.114 "data_size": 63488 00:11:12.114 }, 00:11:12.114 { 00:11:12.114 "name": "BaseBdev4", 00:11:12.114 "uuid": "3d569007-448b-446f-a7d6-e9ed90e3b494", 00:11:12.114 "is_configured": true, 00:11:12.114 "data_offset": 2048, 00:11:12.114 "data_size": 63488 00:11:12.114 } 00:11:12.114 ] 00:11:12.114 }' 00:11:12.114 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.114 08:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.684 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:12.684 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:12.684 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:12.684 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:12.684 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:12.684 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:12.684 
08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:12.684 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:12.684 08:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.684 08:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.684 [2024-10-05 08:47:48.864801] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:12.684 08:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.684 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:12.684 "name": "Existed_Raid", 00:11:12.684 "aliases": [ 00:11:12.684 "6fc00eba-20f6-4b28-8bf7-3f110fe22e4e" 00:11:12.684 ], 00:11:12.684 "product_name": "Raid Volume", 00:11:12.684 "block_size": 512, 00:11:12.684 "num_blocks": 253952, 00:11:12.684 "uuid": "6fc00eba-20f6-4b28-8bf7-3f110fe22e4e", 00:11:12.684 "assigned_rate_limits": { 00:11:12.684 "rw_ios_per_sec": 0, 00:11:12.684 "rw_mbytes_per_sec": 0, 00:11:12.684 "r_mbytes_per_sec": 0, 00:11:12.684 "w_mbytes_per_sec": 0 00:11:12.684 }, 00:11:12.684 "claimed": false, 00:11:12.684 "zoned": false, 00:11:12.684 "supported_io_types": { 00:11:12.684 "read": true, 00:11:12.684 "write": true, 00:11:12.684 "unmap": true, 00:11:12.684 "flush": true, 00:11:12.684 "reset": true, 00:11:12.684 "nvme_admin": false, 00:11:12.684 "nvme_io": false, 00:11:12.684 "nvme_io_md": false, 00:11:12.684 "write_zeroes": true, 00:11:12.684 "zcopy": false, 00:11:12.684 "get_zone_info": false, 00:11:12.684 "zone_management": false, 00:11:12.684 "zone_append": false, 00:11:12.684 "compare": false, 00:11:12.684 "compare_and_write": false, 00:11:12.684 "abort": false, 00:11:12.684 "seek_hole": false, 00:11:12.684 "seek_data": false, 00:11:12.684 "copy": false, 00:11:12.684 
"nvme_iov_md": false 00:11:12.684 }, 00:11:12.684 "memory_domains": [ 00:11:12.684 { 00:11:12.684 "dma_device_id": "system", 00:11:12.684 "dma_device_type": 1 00:11:12.684 }, 00:11:12.684 { 00:11:12.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.684 "dma_device_type": 2 00:11:12.684 }, 00:11:12.684 { 00:11:12.684 "dma_device_id": "system", 00:11:12.684 "dma_device_type": 1 00:11:12.684 }, 00:11:12.684 { 00:11:12.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.684 "dma_device_type": 2 00:11:12.684 }, 00:11:12.684 { 00:11:12.684 "dma_device_id": "system", 00:11:12.684 "dma_device_type": 1 00:11:12.684 }, 00:11:12.684 { 00:11:12.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.684 "dma_device_type": 2 00:11:12.684 }, 00:11:12.684 { 00:11:12.684 "dma_device_id": "system", 00:11:12.684 "dma_device_type": 1 00:11:12.684 }, 00:11:12.684 { 00:11:12.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.684 "dma_device_type": 2 00:11:12.684 } 00:11:12.684 ], 00:11:12.684 "driver_specific": { 00:11:12.684 "raid": { 00:11:12.684 "uuid": "6fc00eba-20f6-4b28-8bf7-3f110fe22e4e", 00:11:12.684 "strip_size_kb": 64, 00:11:12.684 "state": "online", 00:11:12.684 "raid_level": "concat", 00:11:12.684 "superblock": true, 00:11:12.684 "num_base_bdevs": 4, 00:11:12.684 "num_base_bdevs_discovered": 4, 00:11:12.684 "num_base_bdevs_operational": 4, 00:11:12.684 "base_bdevs_list": [ 00:11:12.684 { 00:11:12.684 "name": "BaseBdev1", 00:11:12.684 "uuid": "7c10c1d1-6917-46e9-b586-388c73949bee", 00:11:12.684 "is_configured": true, 00:11:12.684 "data_offset": 2048, 00:11:12.684 "data_size": 63488 00:11:12.684 }, 00:11:12.684 { 00:11:12.684 "name": "BaseBdev2", 00:11:12.684 "uuid": "0e0c1153-23b7-4e8a-b7cd-85d0489a99e6", 00:11:12.684 "is_configured": true, 00:11:12.684 "data_offset": 2048, 00:11:12.684 "data_size": 63488 00:11:12.684 }, 00:11:12.684 { 00:11:12.684 "name": "BaseBdev3", 00:11:12.684 "uuid": "60c59df2-e1b7-44eb-bf2d-f0665f949b4b", 00:11:12.684 "is_configured": true, 
00:11:12.684 "data_offset": 2048, 00:11:12.684 "data_size": 63488 00:11:12.684 }, 00:11:12.684 { 00:11:12.684 "name": "BaseBdev4", 00:11:12.684 "uuid": "3d569007-448b-446f-a7d6-e9ed90e3b494", 00:11:12.684 "is_configured": true, 00:11:12.684 "data_offset": 2048, 00:11:12.684 "data_size": 63488 00:11:12.684 } 00:11:12.684 ] 00:11:12.684 } 00:11:12.684 } 00:11:12.684 }' 00:11:12.684 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:12.684 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:12.684 BaseBdev2 00:11:12.684 BaseBdev3 00:11:12.684 BaseBdev4' 00:11:12.684 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:12.684 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:12.684 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:12.684 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:12.684 08:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:12.684 08:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.684 08:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.684 08:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.684 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:12.684 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:12.684 08:47:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:12.684 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:12.685 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:12.685 08:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.685 08:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.685 08:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.685 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:12.685 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:12.685 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:12.685 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:12.685 08:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.685 08:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.685 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:12.685 08:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.685 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:12.685 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:12.685 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:12.685 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:12.685 08:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.685 08:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.685 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:12.685 08:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.685 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:12.685 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:12.685 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:12.685 08:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.685 08:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.685 [2024-10-05 08:47:49.140082] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:12.685 [2024-10-05 08:47:49.140115] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:12.685 [2024-10-05 08:47:49.140165] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:12.945 08:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.945 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:12.945 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:12.945 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:12.945 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:12.945 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:12.945 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:12.945 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.945 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:12.945 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:12.945 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.945 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:12.945 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.945 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.945 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.945 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.945 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.945 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.945 08:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.945 08:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.945 08:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:12.945 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.945 "name": "Existed_Raid", 00:11:12.945 "uuid": "6fc00eba-20f6-4b28-8bf7-3f110fe22e4e", 00:11:12.945 "strip_size_kb": 64, 00:11:12.945 "state": "offline", 00:11:12.945 "raid_level": "concat", 00:11:12.945 "superblock": true, 00:11:12.945 "num_base_bdevs": 4, 00:11:12.945 "num_base_bdevs_discovered": 3, 00:11:12.945 "num_base_bdevs_operational": 3, 00:11:12.945 "base_bdevs_list": [ 00:11:12.945 { 00:11:12.945 "name": null, 00:11:12.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.945 "is_configured": false, 00:11:12.945 "data_offset": 0, 00:11:12.945 "data_size": 63488 00:11:12.945 }, 00:11:12.945 { 00:11:12.945 "name": "BaseBdev2", 00:11:12.945 "uuid": "0e0c1153-23b7-4e8a-b7cd-85d0489a99e6", 00:11:12.945 "is_configured": true, 00:11:12.945 "data_offset": 2048, 00:11:12.945 "data_size": 63488 00:11:12.945 }, 00:11:12.945 { 00:11:12.945 "name": "BaseBdev3", 00:11:12.945 "uuid": "60c59df2-e1b7-44eb-bf2d-f0665f949b4b", 00:11:12.945 "is_configured": true, 00:11:12.945 "data_offset": 2048, 00:11:12.945 "data_size": 63488 00:11:12.945 }, 00:11:12.945 { 00:11:12.945 "name": "BaseBdev4", 00:11:12.945 "uuid": "3d569007-448b-446f-a7d6-e9ed90e3b494", 00:11:12.945 "is_configured": true, 00:11:12.945 "data_offset": 2048, 00:11:12.945 "data_size": 63488 00:11:12.945 } 00:11:12.945 ] 00:11:12.945 }' 00:11:12.945 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.945 08:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.205 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:13.205 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:13.465 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:13.465 08:47:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.465 08:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.465 08:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.465 08:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.465 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:13.465 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:13.465 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:13.465 08:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.465 08:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.465 [2024-10-05 08:47:49.705137] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:13.465 08:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.465 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:13.465 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:13.465 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.465 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:13.465 08:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.465 08:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.465 08:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:13.465 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:13.465 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:13.465 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:13.465 08:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.465 08:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.465 [2024-10-05 08:47:49.866524] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:13.725 08:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.725 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:13.725 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:13.725 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.725 08:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:13.725 08:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.725 08:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.725 08:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.725 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:13.725 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:13.725 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:13.725 08:47:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.725 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.725 [2024-10-05 08:47:50.024283] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:13.725 [2024-10-05 08:47:50.024342] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:13.725 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.725 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:13.725 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:13.725 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.725 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.725 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.725 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:13.725 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.725 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:13.725 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:13.725 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:13.725 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:13.725 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:13.725 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:13.725 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.725 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.990 BaseBdev2 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.990 [ 00:11:13.990 { 00:11:13.990 "name": "BaseBdev2", 00:11:13.990 "aliases": [ 00:11:13.990 
"2f871604-13ec-4859-9700-3592026ed9f0" 00:11:13.990 ], 00:11:13.990 "product_name": "Malloc disk", 00:11:13.990 "block_size": 512, 00:11:13.990 "num_blocks": 65536, 00:11:13.990 "uuid": "2f871604-13ec-4859-9700-3592026ed9f0", 00:11:13.990 "assigned_rate_limits": { 00:11:13.990 "rw_ios_per_sec": 0, 00:11:13.990 "rw_mbytes_per_sec": 0, 00:11:13.990 "r_mbytes_per_sec": 0, 00:11:13.990 "w_mbytes_per_sec": 0 00:11:13.990 }, 00:11:13.990 "claimed": false, 00:11:13.990 "zoned": false, 00:11:13.990 "supported_io_types": { 00:11:13.990 "read": true, 00:11:13.990 "write": true, 00:11:13.990 "unmap": true, 00:11:13.990 "flush": true, 00:11:13.990 "reset": true, 00:11:13.990 "nvme_admin": false, 00:11:13.990 "nvme_io": false, 00:11:13.990 "nvme_io_md": false, 00:11:13.990 "write_zeroes": true, 00:11:13.990 "zcopy": true, 00:11:13.990 "get_zone_info": false, 00:11:13.990 "zone_management": false, 00:11:13.990 "zone_append": false, 00:11:13.990 "compare": false, 00:11:13.990 "compare_and_write": false, 00:11:13.990 "abort": true, 00:11:13.990 "seek_hole": false, 00:11:13.990 "seek_data": false, 00:11:13.990 "copy": true, 00:11:13.990 "nvme_iov_md": false 00:11:13.990 }, 00:11:13.990 "memory_domains": [ 00:11:13.990 { 00:11:13.990 "dma_device_id": "system", 00:11:13.990 "dma_device_type": 1 00:11:13.990 }, 00:11:13.990 { 00:11:13.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.990 "dma_device_type": 2 00:11:13.990 } 00:11:13.990 ], 00:11:13.990 "driver_specific": {} 00:11:13.990 } 00:11:13.990 ] 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:13.990 08:47:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.990 BaseBdev3 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.990 [ 00:11:13.990 { 
00:11:13.990 "name": "BaseBdev3", 00:11:13.990 "aliases": [ 00:11:13.990 "6ca9a45f-bddd-4f65-9401-8a74ea364bde" 00:11:13.990 ], 00:11:13.990 "product_name": "Malloc disk", 00:11:13.990 "block_size": 512, 00:11:13.990 "num_blocks": 65536, 00:11:13.990 "uuid": "6ca9a45f-bddd-4f65-9401-8a74ea364bde", 00:11:13.990 "assigned_rate_limits": { 00:11:13.990 "rw_ios_per_sec": 0, 00:11:13.990 "rw_mbytes_per_sec": 0, 00:11:13.990 "r_mbytes_per_sec": 0, 00:11:13.990 "w_mbytes_per_sec": 0 00:11:13.990 }, 00:11:13.990 "claimed": false, 00:11:13.990 "zoned": false, 00:11:13.990 "supported_io_types": { 00:11:13.990 "read": true, 00:11:13.990 "write": true, 00:11:13.990 "unmap": true, 00:11:13.990 "flush": true, 00:11:13.990 "reset": true, 00:11:13.990 "nvme_admin": false, 00:11:13.990 "nvme_io": false, 00:11:13.990 "nvme_io_md": false, 00:11:13.990 "write_zeroes": true, 00:11:13.990 "zcopy": true, 00:11:13.990 "get_zone_info": false, 00:11:13.990 "zone_management": false, 00:11:13.990 "zone_append": false, 00:11:13.990 "compare": false, 00:11:13.990 "compare_and_write": false, 00:11:13.990 "abort": true, 00:11:13.990 "seek_hole": false, 00:11:13.990 "seek_data": false, 00:11:13.990 "copy": true, 00:11:13.990 "nvme_iov_md": false 00:11:13.990 }, 00:11:13.990 "memory_domains": [ 00:11:13.990 { 00:11:13.990 "dma_device_id": "system", 00:11:13.990 "dma_device_type": 1 00:11:13.990 }, 00:11:13.990 { 00:11:13.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.990 "dma_device_type": 2 00:11:13.990 } 00:11:13.990 ], 00:11:13.990 "driver_specific": {} 00:11:13.990 } 00:11:13.990 ] 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.990 BaseBdev4 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:13.990 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:13.991 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.991 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.991 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.991 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:13.991 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.991 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:13.991 [ 00:11:13.991 { 00:11:13.991 "name": "BaseBdev4", 00:11:13.991 "aliases": [ 00:11:13.991 "2c5acddc-287e-46a3-9069-5ad95f1f05ab" 00:11:13.991 ], 00:11:13.991 "product_name": "Malloc disk", 00:11:13.991 "block_size": 512, 00:11:13.991 "num_blocks": 65536, 00:11:13.991 "uuid": "2c5acddc-287e-46a3-9069-5ad95f1f05ab", 00:11:13.991 "assigned_rate_limits": { 00:11:13.991 "rw_ios_per_sec": 0, 00:11:13.991 "rw_mbytes_per_sec": 0, 00:11:13.991 "r_mbytes_per_sec": 0, 00:11:13.991 "w_mbytes_per_sec": 0 00:11:13.991 }, 00:11:13.991 "claimed": false, 00:11:13.991 "zoned": false, 00:11:13.991 "supported_io_types": { 00:11:13.991 "read": true, 00:11:13.991 "write": true, 00:11:13.991 "unmap": true, 00:11:13.991 "flush": true, 00:11:13.991 "reset": true, 00:11:13.991 "nvme_admin": false, 00:11:13.991 "nvme_io": false, 00:11:13.991 "nvme_io_md": false, 00:11:13.991 "write_zeroes": true, 00:11:13.991 "zcopy": true, 00:11:13.991 "get_zone_info": false, 00:11:13.991 "zone_management": false, 00:11:13.991 "zone_append": false, 00:11:13.991 "compare": false, 00:11:13.991 "compare_and_write": false, 00:11:13.991 "abort": true, 00:11:13.991 "seek_hole": false, 00:11:13.991 "seek_data": false, 00:11:13.991 "copy": true, 00:11:13.991 "nvme_iov_md": false 00:11:13.991 }, 00:11:13.991 "memory_domains": [ 00:11:13.991 { 00:11:13.991 "dma_device_id": "system", 00:11:13.991 "dma_device_type": 1 00:11:13.991 }, 00:11:13.991 { 00:11:13.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.991 "dma_device_type": 2 00:11:13.991 } 00:11:13.991 ], 00:11:13.991 "driver_specific": {} 00:11:13.991 } 00:11:13.991 ] 00:11:13.991 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.991 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:13.991 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:13.991 08:47:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:13.991 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:13.991 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.991 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.991 [2024-10-05 08:47:50.419394] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:13.991 [2024-10-05 08:47:50.419448] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:13.991 [2024-10-05 08:47:50.419471] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:13.991 [2024-10-05 08:47:50.421580] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:13.991 [2024-10-05 08:47:50.421654] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:13.991 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.991 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:13.991 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.991 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.991 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:13.991 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.991 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:13.991 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.991 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.991 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.991 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.991 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.991 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.991 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.991 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.991 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.250 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.250 "name": "Existed_Raid", 00:11:14.250 "uuid": "c3ea5571-be6c-4c27-bb15-8307bdf9aa0f", 00:11:14.250 "strip_size_kb": 64, 00:11:14.250 "state": "configuring", 00:11:14.250 "raid_level": "concat", 00:11:14.250 "superblock": true, 00:11:14.250 "num_base_bdevs": 4, 00:11:14.250 "num_base_bdevs_discovered": 3, 00:11:14.250 "num_base_bdevs_operational": 4, 00:11:14.250 "base_bdevs_list": [ 00:11:14.250 { 00:11:14.250 "name": "BaseBdev1", 00:11:14.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.250 "is_configured": false, 00:11:14.250 "data_offset": 0, 00:11:14.250 "data_size": 0 00:11:14.250 }, 00:11:14.250 { 00:11:14.250 "name": "BaseBdev2", 00:11:14.250 "uuid": "2f871604-13ec-4859-9700-3592026ed9f0", 00:11:14.250 "is_configured": true, 00:11:14.250 "data_offset": 2048, 00:11:14.250 "data_size": 63488 
00:11:14.250 }, 00:11:14.250 { 00:11:14.250 "name": "BaseBdev3", 00:11:14.250 "uuid": "6ca9a45f-bddd-4f65-9401-8a74ea364bde", 00:11:14.250 "is_configured": true, 00:11:14.250 "data_offset": 2048, 00:11:14.250 "data_size": 63488 00:11:14.250 }, 00:11:14.250 { 00:11:14.250 "name": "BaseBdev4", 00:11:14.250 "uuid": "2c5acddc-287e-46a3-9069-5ad95f1f05ab", 00:11:14.250 "is_configured": true, 00:11:14.250 "data_offset": 2048, 00:11:14.250 "data_size": 63488 00:11:14.250 } 00:11:14.250 ] 00:11:14.250 }' 00:11:14.250 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.250 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.510 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:14.510 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.510 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.510 [2024-10-05 08:47:50.882649] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:14.510 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.510 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:14.510 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.510 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.510 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:14.510 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.510 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:14.510 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.510 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.510 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.510 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.510 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.510 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.510 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.510 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.510 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.510 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.510 "name": "Existed_Raid", 00:11:14.510 "uuid": "c3ea5571-be6c-4c27-bb15-8307bdf9aa0f", 00:11:14.510 "strip_size_kb": 64, 00:11:14.510 "state": "configuring", 00:11:14.510 "raid_level": "concat", 00:11:14.510 "superblock": true, 00:11:14.510 "num_base_bdevs": 4, 00:11:14.510 "num_base_bdevs_discovered": 2, 00:11:14.510 "num_base_bdevs_operational": 4, 00:11:14.510 "base_bdevs_list": [ 00:11:14.510 { 00:11:14.510 "name": "BaseBdev1", 00:11:14.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.510 "is_configured": false, 00:11:14.510 "data_offset": 0, 00:11:14.510 "data_size": 0 00:11:14.510 }, 00:11:14.510 { 00:11:14.510 "name": null, 00:11:14.510 "uuid": "2f871604-13ec-4859-9700-3592026ed9f0", 00:11:14.510 "is_configured": false, 00:11:14.510 "data_offset": 0, 00:11:14.510 "data_size": 63488 
00:11:14.510 }, 00:11:14.510 { 00:11:14.510 "name": "BaseBdev3", 00:11:14.510 "uuid": "6ca9a45f-bddd-4f65-9401-8a74ea364bde", 00:11:14.510 "is_configured": true, 00:11:14.510 "data_offset": 2048, 00:11:14.510 "data_size": 63488 00:11:14.510 }, 00:11:14.510 { 00:11:14.510 "name": "BaseBdev4", 00:11:14.510 "uuid": "2c5acddc-287e-46a3-9069-5ad95f1f05ab", 00:11:14.510 "is_configured": true, 00:11:14.510 "data_offset": 2048, 00:11:14.510 "data_size": 63488 00:11:14.510 } 00:11:14.510 ] 00:11:14.510 }' 00:11:14.510 08:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.510 08:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.118 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.118 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:15.118 08:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.118 08:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.118 08:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.118 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:15.118 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:15.118 08:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.118 08:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.118 [2024-10-05 08:47:51.419905] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:15.118 BaseBdev1 00:11:15.118 08:47:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.118 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:15.118 08:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:15.118 08:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:15.118 08:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:15.118 08:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:15.118 08:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:15.118 08:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:15.118 08:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.118 08:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.118 08:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.118 08:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:15.118 08:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.118 08:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.118 [ 00:11:15.118 { 00:11:15.118 "name": "BaseBdev1", 00:11:15.118 "aliases": [ 00:11:15.118 "a6b901d0-7d10-4cd5-b4b7-4c6f0c117db3" 00:11:15.118 ], 00:11:15.118 "product_name": "Malloc disk", 00:11:15.118 "block_size": 512, 00:11:15.118 "num_blocks": 65536, 00:11:15.118 "uuid": "a6b901d0-7d10-4cd5-b4b7-4c6f0c117db3", 00:11:15.118 "assigned_rate_limits": { 00:11:15.118 "rw_ios_per_sec": 0, 00:11:15.118 "rw_mbytes_per_sec": 0, 
00:11:15.118 "r_mbytes_per_sec": 0, 00:11:15.118 "w_mbytes_per_sec": 0 00:11:15.118 }, 00:11:15.118 "claimed": true, 00:11:15.118 "claim_type": "exclusive_write", 00:11:15.118 "zoned": false, 00:11:15.118 "supported_io_types": { 00:11:15.118 "read": true, 00:11:15.118 "write": true, 00:11:15.118 "unmap": true, 00:11:15.118 "flush": true, 00:11:15.118 "reset": true, 00:11:15.118 "nvme_admin": false, 00:11:15.118 "nvme_io": false, 00:11:15.118 "nvme_io_md": false, 00:11:15.118 "write_zeroes": true, 00:11:15.118 "zcopy": true, 00:11:15.118 "get_zone_info": false, 00:11:15.118 "zone_management": false, 00:11:15.118 "zone_append": false, 00:11:15.118 "compare": false, 00:11:15.118 "compare_and_write": false, 00:11:15.118 "abort": true, 00:11:15.118 "seek_hole": false, 00:11:15.118 "seek_data": false, 00:11:15.118 "copy": true, 00:11:15.118 "nvme_iov_md": false 00:11:15.118 }, 00:11:15.118 "memory_domains": [ 00:11:15.118 { 00:11:15.118 "dma_device_id": "system", 00:11:15.118 "dma_device_type": 1 00:11:15.118 }, 00:11:15.118 { 00:11:15.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.118 "dma_device_type": 2 00:11:15.118 } 00:11:15.118 ], 00:11:15.118 "driver_specific": {} 00:11:15.118 } 00:11:15.118 ] 00:11:15.118 08:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.118 08:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:15.118 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:15.118 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.118 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.118 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:15.118 08:47:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.118 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.118 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.118 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.118 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.118 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.118 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.118 08:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.118 08:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.118 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.118 08:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.118 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.118 "name": "Existed_Raid", 00:11:15.118 "uuid": "c3ea5571-be6c-4c27-bb15-8307bdf9aa0f", 00:11:15.118 "strip_size_kb": 64, 00:11:15.118 "state": "configuring", 00:11:15.118 "raid_level": "concat", 00:11:15.118 "superblock": true, 00:11:15.118 "num_base_bdevs": 4, 00:11:15.118 "num_base_bdevs_discovered": 3, 00:11:15.118 "num_base_bdevs_operational": 4, 00:11:15.118 "base_bdevs_list": [ 00:11:15.118 { 00:11:15.118 "name": "BaseBdev1", 00:11:15.118 "uuid": "a6b901d0-7d10-4cd5-b4b7-4c6f0c117db3", 00:11:15.118 "is_configured": true, 00:11:15.118 "data_offset": 2048, 00:11:15.118 "data_size": 63488 00:11:15.118 }, 00:11:15.118 { 
00:11:15.118 "name": null, 00:11:15.118 "uuid": "2f871604-13ec-4859-9700-3592026ed9f0", 00:11:15.118 "is_configured": false, 00:11:15.118 "data_offset": 0, 00:11:15.118 "data_size": 63488 00:11:15.118 }, 00:11:15.118 { 00:11:15.118 "name": "BaseBdev3", 00:11:15.118 "uuid": "6ca9a45f-bddd-4f65-9401-8a74ea364bde", 00:11:15.118 "is_configured": true, 00:11:15.118 "data_offset": 2048, 00:11:15.118 "data_size": 63488 00:11:15.118 }, 00:11:15.118 { 00:11:15.118 "name": "BaseBdev4", 00:11:15.118 "uuid": "2c5acddc-287e-46a3-9069-5ad95f1f05ab", 00:11:15.118 "is_configured": true, 00:11:15.118 "data_offset": 2048, 00:11:15.119 "data_size": 63488 00:11:15.119 } 00:11:15.119 ] 00:11:15.119 }' 00:11:15.119 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.119 08:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.689 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.689 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:15.689 08:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.689 08:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.689 08:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.689 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:15.689 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:15.689 08:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.689 08:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.689 [2024-10-05 08:47:51.963079] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:15.689 08:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.689 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:15.689 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.689 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.689 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:15.689 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.689 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.689 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.689 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.689 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.689 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.689 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.689 08:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.689 08:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.689 08:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.689 08:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.689 08:47:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.689 "name": "Existed_Raid", 00:11:15.689 "uuid": "c3ea5571-be6c-4c27-bb15-8307bdf9aa0f", 00:11:15.689 "strip_size_kb": 64, 00:11:15.689 "state": "configuring", 00:11:15.689 "raid_level": "concat", 00:11:15.689 "superblock": true, 00:11:15.689 "num_base_bdevs": 4, 00:11:15.689 "num_base_bdevs_discovered": 2, 00:11:15.689 "num_base_bdevs_operational": 4, 00:11:15.689 "base_bdevs_list": [ 00:11:15.689 { 00:11:15.689 "name": "BaseBdev1", 00:11:15.689 "uuid": "a6b901d0-7d10-4cd5-b4b7-4c6f0c117db3", 00:11:15.689 "is_configured": true, 00:11:15.689 "data_offset": 2048, 00:11:15.689 "data_size": 63488 00:11:15.689 }, 00:11:15.689 { 00:11:15.689 "name": null, 00:11:15.690 "uuid": "2f871604-13ec-4859-9700-3592026ed9f0", 00:11:15.690 "is_configured": false, 00:11:15.690 "data_offset": 0, 00:11:15.690 "data_size": 63488 00:11:15.690 }, 00:11:15.690 { 00:11:15.690 "name": null, 00:11:15.690 "uuid": "6ca9a45f-bddd-4f65-9401-8a74ea364bde", 00:11:15.690 "is_configured": false, 00:11:15.690 "data_offset": 0, 00:11:15.690 "data_size": 63488 00:11:15.690 }, 00:11:15.690 { 00:11:15.690 "name": "BaseBdev4", 00:11:15.690 "uuid": "2c5acddc-287e-46a3-9069-5ad95f1f05ab", 00:11:15.690 "is_configured": true, 00:11:15.690 "data_offset": 2048, 00:11:15.690 "data_size": 63488 00:11:15.690 } 00:11:15.690 ] 00:11:15.690 }' 00:11:15.690 08:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.690 08:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.949 08:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.949 08:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:15.949 08:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.949 
08:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.209 08:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.209 08:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:16.209 08:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:16.209 08:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.209 08:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.209 [2024-10-05 08:47:52.458243] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:16.210 08:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.210 08:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:16.210 08:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.210 08:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.210 08:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:16.210 08:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.210 08:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.210 08:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.210 08:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.210 08:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:16.210 08:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.210 08:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.210 08:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.210 08:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.210 08:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.210 08:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.210 08:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.210 "name": "Existed_Raid", 00:11:16.210 "uuid": "c3ea5571-be6c-4c27-bb15-8307bdf9aa0f", 00:11:16.210 "strip_size_kb": 64, 00:11:16.210 "state": "configuring", 00:11:16.210 "raid_level": "concat", 00:11:16.210 "superblock": true, 00:11:16.210 "num_base_bdevs": 4, 00:11:16.210 "num_base_bdevs_discovered": 3, 00:11:16.210 "num_base_bdevs_operational": 4, 00:11:16.210 "base_bdevs_list": [ 00:11:16.210 { 00:11:16.210 "name": "BaseBdev1", 00:11:16.210 "uuid": "a6b901d0-7d10-4cd5-b4b7-4c6f0c117db3", 00:11:16.210 "is_configured": true, 00:11:16.210 "data_offset": 2048, 00:11:16.210 "data_size": 63488 00:11:16.210 }, 00:11:16.210 { 00:11:16.210 "name": null, 00:11:16.210 "uuid": "2f871604-13ec-4859-9700-3592026ed9f0", 00:11:16.210 "is_configured": false, 00:11:16.210 "data_offset": 0, 00:11:16.210 "data_size": 63488 00:11:16.210 }, 00:11:16.210 { 00:11:16.210 "name": "BaseBdev3", 00:11:16.210 "uuid": "6ca9a45f-bddd-4f65-9401-8a74ea364bde", 00:11:16.210 "is_configured": true, 00:11:16.210 "data_offset": 2048, 00:11:16.210 "data_size": 63488 00:11:16.210 }, 00:11:16.210 { 00:11:16.210 "name": "BaseBdev4", 00:11:16.210 "uuid": 
"2c5acddc-287e-46a3-9069-5ad95f1f05ab", 00:11:16.210 "is_configured": true, 00:11:16.210 "data_offset": 2048, 00:11:16.210 "data_size": 63488 00:11:16.210 } 00:11:16.210 ] 00:11:16.210 }' 00:11:16.210 08:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.210 08:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.469 08:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.469 08:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.469 08:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.469 08:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:16.469 08:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.729 08:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:16.729 08:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:16.729 08:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.729 08:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.729 [2024-10-05 08:47:52.953434] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:16.729 08:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.729 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:16.729 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.729 08:47:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.729 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:16.729 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.729 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.729 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.729 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.729 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.729 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.729 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.729 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.729 08:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.729 08:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.729 08:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.729 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.729 "name": "Existed_Raid", 00:11:16.729 "uuid": "c3ea5571-be6c-4c27-bb15-8307bdf9aa0f", 00:11:16.729 "strip_size_kb": 64, 00:11:16.729 "state": "configuring", 00:11:16.729 "raid_level": "concat", 00:11:16.729 "superblock": true, 00:11:16.729 "num_base_bdevs": 4, 00:11:16.729 "num_base_bdevs_discovered": 2, 00:11:16.729 "num_base_bdevs_operational": 4, 00:11:16.729 "base_bdevs_list": [ 00:11:16.729 { 00:11:16.729 "name": null, 00:11:16.729 
"uuid": "a6b901d0-7d10-4cd5-b4b7-4c6f0c117db3", 00:11:16.729 "is_configured": false, 00:11:16.729 "data_offset": 0, 00:11:16.729 "data_size": 63488 00:11:16.729 }, 00:11:16.729 { 00:11:16.729 "name": null, 00:11:16.729 "uuid": "2f871604-13ec-4859-9700-3592026ed9f0", 00:11:16.729 "is_configured": false, 00:11:16.729 "data_offset": 0, 00:11:16.729 "data_size": 63488 00:11:16.729 }, 00:11:16.729 { 00:11:16.729 "name": "BaseBdev3", 00:11:16.729 "uuid": "6ca9a45f-bddd-4f65-9401-8a74ea364bde", 00:11:16.729 "is_configured": true, 00:11:16.729 "data_offset": 2048, 00:11:16.729 "data_size": 63488 00:11:16.729 }, 00:11:16.729 { 00:11:16.729 "name": "BaseBdev4", 00:11:16.729 "uuid": "2c5acddc-287e-46a3-9069-5ad95f1f05ab", 00:11:16.729 "is_configured": true, 00:11:16.729 "data_offset": 2048, 00:11:16.729 "data_size": 63488 00:11:16.729 } 00:11:16.729 ] 00:11:16.729 }' 00:11:16.730 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.730 08:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.990 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:16.990 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.990 08:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.990 08:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.250 08:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.250 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:17.250 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:17.250 08:47:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.250 08:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.250 [2024-10-05 08:47:53.469026] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:17.250 08:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.250 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:17.250 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.250 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.250 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:17.250 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.250 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.250 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.250 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.250 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.250 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.250 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.250 08:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.250 08:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.250 08:47:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.250 08:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.250 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.250 "name": "Existed_Raid", 00:11:17.250 "uuid": "c3ea5571-be6c-4c27-bb15-8307bdf9aa0f", 00:11:17.250 "strip_size_kb": 64, 00:11:17.250 "state": "configuring", 00:11:17.250 "raid_level": "concat", 00:11:17.250 "superblock": true, 00:11:17.250 "num_base_bdevs": 4, 00:11:17.250 "num_base_bdevs_discovered": 3, 00:11:17.250 "num_base_bdevs_operational": 4, 00:11:17.250 "base_bdevs_list": [ 00:11:17.250 { 00:11:17.250 "name": null, 00:11:17.250 "uuid": "a6b901d0-7d10-4cd5-b4b7-4c6f0c117db3", 00:11:17.250 "is_configured": false, 00:11:17.250 "data_offset": 0, 00:11:17.250 "data_size": 63488 00:11:17.250 }, 00:11:17.250 { 00:11:17.250 "name": "BaseBdev2", 00:11:17.250 "uuid": "2f871604-13ec-4859-9700-3592026ed9f0", 00:11:17.250 "is_configured": true, 00:11:17.250 "data_offset": 2048, 00:11:17.250 "data_size": 63488 00:11:17.250 }, 00:11:17.250 { 00:11:17.250 "name": "BaseBdev3", 00:11:17.250 "uuid": "6ca9a45f-bddd-4f65-9401-8a74ea364bde", 00:11:17.250 "is_configured": true, 00:11:17.250 "data_offset": 2048, 00:11:17.250 "data_size": 63488 00:11:17.250 }, 00:11:17.250 { 00:11:17.250 "name": "BaseBdev4", 00:11:17.250 "uuid": "2c5acddc-287e-46a3-9069-5ad95f1f05ab", 00:11:17.250 "is_configured": true, 00:11:17.250 "data_offset": 2048, 00:11:17.250 "data_size": 63488 00:11:17.250 } 00:11:17.250 ] 00:11:17.250 }' 00:11:17.250 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.250 08:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.510 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.510 08:47:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.510 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:17.510 08:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.510 08:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.510 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:17.510 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.510 08:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.510 08:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.510 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:17.510 08:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.770 08:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a6b901d0-7d10-4cd5-b4b7-4c6f0c117db3 00:11:17.770 08:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.770 08:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.770 [2024-10-05 08:47:54.026393] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:17.770 [2024-10-05 08:47:54.026688] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:17.770 [2024-10-05 08:47:54.026704] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:17.770 [2024-10-05 08:47:54.027014] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:17.770 [2024-10-05 08:47:54.027181] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:17.770 [2024-10-05 08:47:54.027200] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:17.770 [2024-10-05 08:47:54.027338] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.770 NewBaseBdev 00:11:17.770 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.770 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:17.770 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:17.770 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:17.770 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:17.770 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:17.770 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:17.770 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:17.770 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.770 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.770 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.770 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:17.770 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.770 08:47:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.770 [ 00:11:17.770 { 00:11:17.770 "name": "NewBaseBdev", 00:11:17.770 "aliases": [ 00:11:17.770 "a6b901d0-7d10-4cd5-b4b7-4c6f0c117db3" 00:11:17.770 ], 00:11:17.770 "product_name": "Malloc disk", 00:11:17.770 "block_size": 512, 00:11:17.770 "num_blocks": 65536, 00:11:17.770 "uuid": "a6b901d0-7d10-4cd5-b4b7-4c6f0c117db3", 00:11:17.770 "assigned_rate_limits": { 00:11:17.770 "rw_ios_per_sec": 0, 00:11:17.770 "rw_mbytes_per_sec": 0, 00:11:17.770 "r_mbytes_per_sec": 0, 00:11:17.770 "w_mbytes_per_sec": 0 00:11:17.770 }, 00:11:17.770 "claimed": true, 00:11:17.770 "claim_type": "exclusive_write", 00:11:17.770 "zoned": false, 00:11:17.770 "supported_io_types": { 00:11:17.770 "read": true, 00:11:17.770 "write": true, 00:11:17.770 "unmap": true, 00:11:17.770 "flush": true, 00:11:17.770 "reset": true, 00:11:17.770 "nvme_admin": false, 00:11:17.770 "nvme_io": false, 00:11:17.770 "nvme_io_md": false, 00:11:17.770 "write_zeroes": true, 00:11:17.770 "zcopy": true, 00:11:17.770 "get_zone_info": false, 00:11:17.770 "zone_management": false, 00:11:17.770 "zone_append": false, 00:11:17.770 "compare": false, 00:11:17.770 "compare_and_write": false, 00:11:17.770 "abort": true, 00:11:17.770 "seek_hole": false, 00:11:17.770 "seek_data": false, 00:11:17.770 "copy": true, 00:11:17.770 "nvme_iov_md": false 00:11:17.770 }, 00:11:17.770 "memory_domains": [ 00:11:17.770 { 00:11:17.770 "dma_device_id": "system", 00:11:17.770 "dma_device_type": 1 00:11:17.770 }, 00:11:17.770 { 00:11:17.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.770 "dma_device_type": 2 00:11:17.770 } 00:11:17.770 ], 00:11:17.770 "driver_specific": {} 00:11:17.770 } 00:11:17.770 ] 00:11:17.770 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.770 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:17.770 08:47:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:17.770 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.770 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:17.770 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:17.770 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.770 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.770 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.770 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.770 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.770 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.770 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.770 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.770 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.770 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.770 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.770 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.770 "name": "Existed_Raid", 00:11:17.770 "uuid": "c3ea5571-be6c-4c27-bb15-8307bdf9aa0f", 00:11:17.770 "strip_size_kb": 64, 00:11:17.770 
"state": "online", 00:11:17.770 "raid_level": "concat", 00:11:17.770 "superblock": true, 00:11:17.771 "num_base_bdevs": 4, 00:11:17.771 "num_base_bdevs_discovered": 4, 00:11:17.771 "num_base_bdevs_operational": 4, 00:11:17.771 "base_bdevs_list": [ 00:11:17.771 { 00:11:17.771 "name": "NewBaseBdev", 00:11:17.771 "uuid": "a6b901d0-7d10-4cd5-b4b7-4c6f0c117db3", 00:11:17.771 "is_configured": true, 00:11:17.771 "data_offset": 2048, 00:11:17.771 "data_size": 63488 00:11:17.771 }, 00:11:17.771 { 00:11:17.771 "name": "BaseBdev2", 00:11:17.771 "uuid": "2f871604-13ec-4859-9700-3592026ed9f0", 00:11:17.771 "is_configured": true, 00:11:17.771 "data_offset": 2048, 00:11:17.771 "data_size": 63488 00:11:17.771 }, 00:11:17.771 { 00:11:17.771 "name": "BaseBdev3", 00:11:17.771 "uuid": "6ca9a45f-bddd-4f65-9401-8a74ea364bde", 00:11:17.771 "is_configured": true, 00:11:17.771 "data_offset": 2048, 00:11:17.771 "data_size": 63488 00:11:17.771 }, 00:11:17.771 { 00:11:17.771 "name": "BaseBdev4", 00:11:17.771 "uuid": "2c5acddc-287e-46a3-9069-5ad95f1f05ab", 00:11:17.771 "is_configured": true, 00:11:17.771 "data_offset": 2048, 00:11:17.771 "data_size": 63488 00:11:17.771 } 00:11:17.771 ] 00:11:17.771 }' 00:11:17.771 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.771 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.031 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:18.031 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:18.031 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:18.031 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:18.031 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:18.031 
08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:18.031 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:18.031 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:18.031 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.031 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.031 [2024-10-05 08:47:54.442083] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:18.031 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.031 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:18.031 "name": "Existed_Raid", 00:11:18.031 "aliases": [ 00:11:18.031 "c3ea5571-be6c-4c27-bb15-8307bdf9aa0f" 00:11:18.031 ], 00:11:18.031 "product_name": "Raid Volume", 00:11:18.031 "block_size": 512, 00:11:18.031 "num_blocks": 253952, 00:11:18.031 "uuid": "c3ea5571-be6c-4c27-bb15-8307bdf9aa0f", 00:11:18.031 "assigned_rate_limits": { 00:11:18.031 "rw_ios_per_sec": 0, 00:11:18.031 "rw_mbytes_per_sec": 0, 00:11:18.031 "r_mbytes_per_sec": 0, 00:11:18.031 "w_mbytes_per_sec": 0 00:11:18.031 }, 00:11:18.031 "claimed": false, 00:11:18.031 "zoned": false, 00:11:18.031 "supported_io_types": { 00:11:18.031 "read": true, 00:11:18.031 "write": true, 00:11:18.031 "unmap": true, 00:11:18.031 "flush": true, 00:11:18.031 "reset": true, 00:11:18.031 "nvme_admin": false, 00:11:18.031 "nvme_io": false, 00:11:18.031 "nvme_io_md": false, 00:11:18.031 "write_zeroes": true, 00:11:18.031 "zcopy": false, 00:11:18.031 "get_zone_info": false, 00:11:18.031 "zone_management": false, 00:11:18.031 "zone_append": false, 00:11:18.031 "compare": false, 00:11:18.031 "compare_and_write": false, 00:11:18.031 "abort": 
false, 00:11:18.031 "seek_hole": false, 00:11:18.031 "seek_data": false, 00:11:18.031 "copy": false, 00:11:18.031 "nvme_iov_md": false 00:11:18.031 }, 00:11:18.031 "memory_domains": [ 00:11:18.031 { 00:11:18.031 "dma_device_id": "system", 00:11:18.031 "dma_device_type": 1 00:11:18.031 }, 00:11:18.031 { 00:11:18.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.031 "dma_device_type": 2 00:11:18.031 }, 00:11:18.031 { 00:11:18.031 "dma_device_id": "system", 00:11:18.031 "dma_device_type": 1 00:11:18.031 }, 00:11:18.031 { 00:11:18.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.031 "dma_device_type": 2 00:11:18.031 }, 00:11:18.031 { 00:11:18.031 "dma_device_id": "system", 00:11:18.031 "dma_device_type": 1 00:11:18.031 }, 00:11:18.031 { 00:11:18.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.031 "dma_device_type": 2 00:11:18.031 }, 00:11:18.031 { 00:11:18.031 "dma_device_id": "system", 00:11:18.031 "dma_device_type": 1 00:11:18.031 }, 00:11:18.031 { 00:11:18.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.031 "dma_device_type": 2 00:11:18.031 } 00:11:18.031 ], 00:11:18.031 "driver_specific": { 00:11:18.031 "raid": { 00:11:18.031 "uuid": "c3ea5571-be6c-4c27-bb15-8307bdf9aa0f", 00:11:18.031 "strip_size_kb": 64, 00:11:18.031 "state": "online", 00:11:18.031 "raid_level": "concat", 00:11:18.031 "superblock": true, 00:11:18.031 "num_base_bdevs": 4, 00:11:18.031 "num_base_bdevs_discovered": 4, 00:11:18.031 "num_base_bdevs_operational": 4, 00:11:18.031 "base_bdevs_list": [ 00:11:18.031 { 00:11:18.031 "name": "NewBaseBdev", 00:11:18.031 "uuid": "a6b901d0-7d10-4cd5-b4b7-4c6f0c117db3", 00:11:18.031 "is_configured": true, 00:11:18.031 "data_offset": 2048, 00:11:18.031 "data_size": 63488 00:11:18.031 }, 00:11:18.031 { 00:11:18.031 "name": "BaseBdev2", 00:11:18.031 "uuid": "2f871604-13ec-4859-9700-3592026ed9f0", 00:11:18.031 "is_configured": true, 00:11:18.031 "data_offset": 2048, 00:11:18.031 "data_size": 63488 00:11:18.031 }, 00:11:18.031 { 00:11:18.031 
"name": "BaseBdev3", 00:11:18.031 "uuid": "6ca9a45f-bddd-4f65-9401-8a74ea364bde", 00:11:18.031 "is_configured": true, 00:11:18.031 "data_offset": 2048, 00:11:18.031 "data_size": 63488 00:11:18.031 }, 00:11:18.031 { 00:11:18.031 "name": "BaseBdev4", 00:11:18.031 "uuid": "2c5acddc-287e-46a3-9069-5ad95f1f05ab", 00:11:18.031 "is_configured": true, 00:11:18.031 "data_offset": 2048, 00:11:18.031 "data_size": 63488 00:11:18.031 } 00:11:18.031 ] 00:11:18.031 } 00:11:18.031 } 00:11:18.031 }' 00:11:18.031 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:18.292 BaseBdev2 00:11:18.292 BaseBdev3 00:11:18.292 BaseBdev4' 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.292 08:47:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.292 [2024-10-05 08:47:54.717231] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:18.292 [2024-10-05 08:47:54.717264] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:18.292 [2024-10-05 08:47:54.717342] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:18.292 [2024-10-05 08:47:54.717413] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:18.292 [2024-10-05 08:47:54.717424] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70504 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 70504 ']' 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 70504 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70504 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:18.292 killing process with pid 70504 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70504' 00:11:18.292 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 70504 00:11:18.292 [2024-10-05 08:47:54.762321] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:18.551 08:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 70504 00:11:18.811 [2024-10-05 08:47:55.179940] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:20.193 08:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:20.193 00:11:20.193 real 0m11.557s 00:11:20.193 user 0m18.020s 00:11:20.193 sys 0m2.147s 00:11:20.193 08:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:20.193 08:47:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.193 ************************************ 00:11:20.193 END TEST raid_state_function_test_sb 00:11:20.193 ************************************ 00:11:20.193 08:47:56 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:11:20.193 08:47:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:20.193 08:47:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:20.193 08:47:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:20.193 ************************************ 00:11:20.193 START TEST raid_superblock_test 00:11:20.193 ************************************ 00:11:20.193 08:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:11:20.193 08:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:20.193 08:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:20.193 08:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:20.193 08:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:20.193 08:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:20.193 08:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:20.193 08:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:20.193 08:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:20.193 08:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:20.193 08:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:20.193 08:47:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:20.193 08:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:20.193 08:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:20.193 08:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:20.193 08:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:20.193 08:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:20.193 08:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=71108 00:11:20.193 08:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:20.193 08:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 71108 00:11:20.193 08:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 71108 ']' 00:11:20.193 08:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.193 08:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:20.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:20.193 08:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:20.193 08:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:20.193 08:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.453 [2024-10-05 08:47:56.685975] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:11:20.453 [2024-10-05 08:47:56.686133] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71108 ] 00:11:20.453 [2024-10-05 08:47:56.855815] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.712 [2024-10-05 08:47:57.101779] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.972 [2024-10-05 08:47:57.335313] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:20.972 [2024-10-05 08:47:57.335348] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:21.233 08:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:21.233 08:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:11:21.233 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:21.233 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:21.233 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:21.233 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:21.233 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:21.233 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:21.233 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:21.233 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:21.233 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:21.233 
08:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.233 08:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.233 malloc1 00:11:21.233 08:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.233 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:21.233 08:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.233 08:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.233 [2024-10-05 08:47:57.567198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:21.233 [2024-10-05 08:47:57.567288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.233 [2024-10-05 08:47:57.567316] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:21.233 [2024-10-05 08:47:57.567330] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.233 [2024-10-05 08:47:57.569664] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.233 [2024-10-05 08:47:57.569700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:21.233 pt1 00:11:21.233 08:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.233 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:21.233 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:21.233 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:21.233 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:21.233 08:47:57 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:21.233 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:21.233 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:21.233 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:21.233 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:21.233 08:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.233 08:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.233 malloc2 00:11:21.233 08:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.233 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:21.233 08:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.233 08:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.233 [2024-10-05 08:47:57.639463] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:21.233 [2024-10-05 08:47:57.639518] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.233 [2024-10-05 08:47:57.639543] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:21.233 [2024-10-05 08:47:57.639553] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.233 [2024-10-05 08:47:57.641909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.233 [2024-10-05 08:47:57.641943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:21.233 
pt2 00:11:21.233 08:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.233 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:21.233 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:21.233 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:21.233 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:21.233 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:21.234 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:21.234 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:21.234 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:21.234 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:21.234 08:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.234 08:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.234 malloc3 00:11:21.234 08:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.234 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:21.234 08:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.234 08:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.234 [2024-10-05 08:47:57.701284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:21.234 [2024-10-05 08:47:57.701333] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.234 [2024-10-05 08:47:57.701355] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:21.234 [2024-10-05 08:47:57.701364] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.494 [2024-10-05 08:47:57.703666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.494 [2024-10-05 08:47:57.703700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:21.494 pt3 00:11:21.494 08:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.494 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:21.494 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:21.494 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:21.494 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:21.494 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:21.494 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:21.494 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:21.494 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:21.494 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:21.495 08:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.495 08:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.495 malloc4 00:11:21.495 08:47:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.495 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:21.495 08:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.495 08:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.495 [2024-10-05 08:47:57.762741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:21.495 [2024-10-05 08:47:57.762806] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.495 [2024-10-05 08:47:57.762824] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:21.495 [2024-10-05 08:47:57.762834] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.495 [2024-10-05 08:47:57.765160] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.495 [2024-10-05 08:47:57.765194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:21.495 pt4 00:11:21.495 08:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.495 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:21.495 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:21.495 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:21.495 08:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.495 08:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.495 [2024-10-05 08:47:57.774808] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:21.495 [2024-10-05 
08:47:57.776867] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:21.495 [2024-10-05 08:47:57.776934] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:21.495 [2024-10-05 08:47:57.777006] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:21.495 [2024-10-05 08:47:57.777199] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:21.495 [2024-10-05 08:47:57.777226] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:21.495 [2024-10-05 08:47:57.777476] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:21.495 [2024-10-05 08:47:57.777642] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:21.495 [2024-10-05 08:47:57.777661] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:21.495 [2024-10-05 08:47:57.777805] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:21.495 08:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.495 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:21.495 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:21.495 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:21.495 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:21.495 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.495 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.495 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:21.495 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.495 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.495 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.495 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.495 08:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.495 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.495 08:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.495 08:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.495 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.495 "name": "raid_bdev1", 00:11:21.495 "uuid": "bc3166e7-1e55-455e-ace4-a86bb9867274", 00:11:21.495 "strip_size_kb": 64, 00:11:21.495 "state": "online", 00:11:21.495 "raid_level": "concat", 00:11:21.495 "superblock": true, 00:11:21.495 "num_base_bdevs": 4, 00:11:21.495 "num_base_bdevs_discovered": 4, 00:11:21.495 "num_base_bdevs_operational": 4, 00:11:21.495 "base_bdevs_list": [ 00:11:21.495 { 00:11:21.495 "name": "pt1", 00:11:21.495 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:21.495 "is_configured": true, 00:11:21.495 "data_offset": 2048, 00:11:21.495 "data_size": 63488 00:11:21.495 }, 00:11:21.495 { 00:11:21.495 "name": "pt2", 00:11:21.495 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:21.495 "is_configured": true, 00:11:21.495 "data_offset": 2048, 00:11:21.495 "data_size": 63488 00:11:21.495 }, 00:11:21.495 { 00:11:21.495 "name": "pt3", 00:11:21.495 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:21.495 "is_configured": true, 00:11:21.495 "data_offset": 2048, 00:11:21.495 
"data_size": 63488 00:11:21.495 }, 00:11:21.495 { 00:11:21.495 "name": "pt4", 00:11:21.495 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:21.495 "is_configured": true, 00:11:21.495 "data_offset": 2048, 00:11:21.495 "data_size": 63488 00:11:21.495 } 00:11:21.495 ] 00:11:21.495 }' 00:11:21.495 08:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.495 08:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.065 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:22.065 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:22.065 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:22.065 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:22.065 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:22.065 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:22.065 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:22.065 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:22.065 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.065 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.065 [2024-10-05 08:47:58.286237] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:22.065 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.065 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:22.065 "name": "raid_bdev1", 00:11:22.065 "aliases": [ 00:11:22.065 "bc3166e7-1e55-455e-ace4-a86bb9867274" 
00:11:22.065 ], 00:11:22.065 "product_name": "Raid Volume", 00:11:22.065 "block_size": 512, 00:11:22.065 "num_blocks": 253952, 00:11:22.065 "uuid": "bc3166e7-1e55-455e-ace4-a86bb9867274", 00:11:22.065 "assigned_rate_limits": { 00:11:22.065 "rw_ios_per_sec": 0, 00:11:22.065 "rw_mbytes_per_sec": 0, 00:11:22.065 "r_mbytes_per_sec": 0, 00:11:22.065 "w_mbytes_per_sec": 0 00:11:22.065 }, 00:11:22.065 "claimed": false, 00:11:22.065 "zoned": false, 00:11:22.065 "supported_io_types": { 00:11:22.065 "read": true, 00:11:22.065 "write": true, 00:11:22.065 "unmap": true, 00:11:22.065 "flush": true, 00:11:22.065 "reset": true, 00:11:22.065 "nvme_admin": false, 00:11:22.065 "nvme_io": false, 00:11:22.065 "nvme_io_md": false, 00:11:22.065 "write_zeroes": true, 00:11:22.065 "zcopy": false, 00:11:22.065 "get_zone_info": false, 00:11:22.065 "zone_management": false, 00:11:22.065 "zone_append": false, 00:11:22.065 "compare": false, 00:11:22.065 "compare_and_write": false, 00:11:22.065 "abort": false, 00:11:22.065 "seek_hole": false, 00:11:22.065 "seek_data": false, 00:11:22.065 "copy": false, 00:11:22.065 "nvme_iov_md": false 00:11:22.065 }, 00:11:22.065 "memory_domains": [ 00:11:22.065 { 00:11:22.065 "dma_device_id": "system", 00:11:22.065 "dma_device_type": 1 00:11:22.065 }, 00:11:22.065 { 00:11:22.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.065 "dma_device_type": 2 00:11:22.065 }, 00:11:22.065 { 00:11:22.065 "dma_device_id": "system", 00:11:22.065 "dma_device_type": 1 00:11:22.065 }, 00:11:22.065 { 00:11:22.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.065 "dma_device_type": 2 00:11:22.065 }, 00:11:22.065 { 00:11:22.065 "dma_device_id": "system", 00:11:22.065 "dma_device_type": 1 00:11:22.065 }, 00:11:22.065 { 00:11:22.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.065 "dma_device_type": 2 00:11:22.065 }, 00:11:22.065 { 00:11:22.065 "dma_device_id": "system", 00:11:22.065 "dma_device_type": 1 00:11:22.065 }, 00:11:22.065 { 00:11:22.065 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:22.065 "dma_device_type": 2 00:11:22.065 } 00:11:22.065 ], 00:11:22.065 "driver_specific": { 00:11:22.065 "raid": { 00:11:22.065 "uuid": "bc3166e7-1e55-455e-ace4-a86bb9867274", 00:11:22.065 "strip_size_kb": 64, 00:11:22.065 "state": "online", 00:11:22.065 "raid_level": "concat", 00:11:22.065 "superblock": true, 00:11:22.065 "num_base_bdevs": 4, 00:11:22.065 "num_base_bdevs_discovered": 4, 00:11:22.065 "num_base_bdevs_operational": 4, 00:11:22.065 "base_bdevs_list": [ 00:11:22.065 { 00:11:22.065 "name": "pt1", 00:11:22.065 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:22.065 "is_configured": true, 00:11:22.065 "data_offset": 2048, 00:11:22.065 "data_size": 63488 00:11:22.065 }, 00:11:22.065 { 00:11:22.065 "name": "pt2", 00:11:22.065 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:22.065 "is_configured": true, 00:11:22.065 "data_offset": 2048, 00:11:22.065 "data_size": 63488 00:11:22.065 }, 00:11:22.065 { 00:11:22.065 "name": "pt3", 00:11:22.065 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:22.065 "is_configured": true, 00:11:22.065 "data_offset": 2048, 00:11:22.065 "data_size": 63488 00:11:22.065 }, 00:11:22.065 { 00:11:22.065 "name": "pt4", 00:11:22.065 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:22.065 "is_configured": true, 00:11:22.065 "data_offset": 2048, 00:11:22.065 "data_size": 63488 00:11:22.065 } 00:11:22.065 ] 00:11:22.065 } 00:11:22.065 } 00:11:22.065 }' 00:11:22.065 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:22.065 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:22.065 pt2 00:11:22.065 pt3 00:11:22.065 pt4' 00:11:22.065 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.065 08:47:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:22.065 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:22.065 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:22.065 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.065 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.065 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.065 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.065 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.065 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.065 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:22.065 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:22.065 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.065 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.065 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.065 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.065 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.065 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.065 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:22.065 08:47:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:22.065 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.065 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.065 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.065 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.325 [2024-10-05 08:47:58.609591] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bc3166e7-1e55-455e-ace4-a86bb9867274 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z bc3166e7-1e55-455e-ace4-a86bb9867274 ']' 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.325 [2024-10-05 08:47:58.657234] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:22.325 [2024-10-05 08:47:58.657264] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:22.325 [2024-10-05 08:47:58.657356] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:22.325 [2024-10-05 08:47:58.657428] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:22.325 [2024-10-05 08:47:58.657446] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.325 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:22.326 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:22.326 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.326 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.326 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.326 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:22.326 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:22.326 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:22.326 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:22.326 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:22.326 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:22.326 08:47:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:22.586 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:22.586 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:22.586 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.586 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.586 [2024-10-05 08:47:58.801027] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:22.586 [2024-10-05 08:47:58.803163] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:22.586 [2024-10-05 08:47:58.803210] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:22.586 [2024-10-05 08:47:58.803243] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:22.586 [2024-10-05 08:47:58.803289] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:22.586 [2024-10-05 08:47:58.803330] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:22.586 [2024-10-05 08:47:58.803347] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:22.586 [2024-10-05 08:47:58.803364] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:22.586 [2024-10-05 08:47:58.803376] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:22.586 [2024-10-05 08:47:58.803387] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:11:22.586 request: 00:11:22.586 { 00:11:22.586 "name": "raid_bdev1", 00:11:22.586 "raid_level": "concat", 00:11:22.586 "base_bdevs": [ 00:11:22.586 "malloc1", 00:11:22.586 "malloc2", 00:11:22.586 "malloc3", 00:11:22.586 "malloc4" 00:11:22.586 ], 00:11:22.586 "strip_size_kb": 64, 00:11:22.586 "superblock": false, 00:11:22.586 "method": "bdev_raid_create", 00:11:22.586 "req_id": 1 00:11:22.586 } 00:11:22.586 Got JSON-RPC error response 00:11:22.586 response: 00:11:22.586 { 00:11:22.586 "code": -17, 00:11:22.586 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:22.586 } 00:11:22.586 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:22.586 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:22.586 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:22.586 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:22.586 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:22.586 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.586 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:22.586 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.586 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.586 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.586 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:22.586 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:22.586 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:11:22.586 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.586 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.586 [2024-10-05 08:47:58.868873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:22.586 [2024-10-05 08:47:58.868921] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:22.586 [2024-10-05 08:47:58.868938] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:22.587 [2024-10-05 08:47:58.868949] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.587 [2024-10-05 08:47:58.871401] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:22.587 [2024-10-05 08:47:58.871437] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:22.587 [2024-10-05 08:47:58.871523] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:22.587 [2024-10-05 08:47:58.871578] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:22.587 pt1 00:11:22.587 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.587 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:22.587 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:22.587 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.587 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:22.587 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.587 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:22.587 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.587 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.587 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.587 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.587 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:22.587 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.587 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.587 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.587 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.587 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.587 "name": "raid_bdev1", 00:11:22.587 "uuid": "bc3166e7-1e55-455e-ace4-a86bb9867274", 00:11:22.587 "strip_size_kb": 64, 00:11:22.587 "state": "configuring", 00:11:22.587 "raid_level": "concat", 00:11:22.587 "superblock": true, 00:11:22.587 "num_base_bdevs": 4, 00:11:22.587 "num_base_bdevs_discovered": 1, 00:11:22.587 "num_base_bdevs_operational": 4, 00:11:22.587 "base_bdevs_list": [ 00:11:22.587 { 00:11:22.587 "name": "pt1", 00:11:22.587 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:22.587 "is_configured": true, 00:11:22.587 "data_offset": 2048, 00:11:22.587 "data_size": 63488 00:11:22.587 }, 00:11:22.587 { 00:11:22.587 "name": null, 00:11:22.587 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:22.587 "is_configured": false, 00:11:22.587 "data_offset": 2048, 00:11:22.587 "data_size": 63488 00:11:22.587 }, 00:11:22.587 { 00:11:22.587 "name": null, 00:11:22.587 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:22.587 "is_configured": false, 00:11:22.587 "data_offset": 2048, 00:11:22.587 "data_size": 63488 00:11:22.587 }, 00:11:22.587 { 00:11:22.587 "name": null, 00:11:22.587 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:22.587 "is_configured": false, 00:11:22.587 "data_offset": 2048, 00:11:22.587 "data_size": 63488 00:11:22.587 } 00:11:22.587 ] 00:11:22.587 }' 00:11:22.587 08:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.587 08:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.157 08:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:23.157 08:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:23.157 08:47:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.157 08:47:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.157 [2024-10-05 08:47:59.344117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:23.157 [2024-10-05 08:47:59.344199] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:23.157 [2024-10-05 08:47:59.344222] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:23.157 [2024-10-05 08:47:59.344251] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.157 [2024-10-05 08:47:59.344782] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.157 [2024-10-05 08:47:59.344812] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:23.157 [2024-10-05 08:47:59.344911] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:23.157 [2024-10-05 08:47:59.344943] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:23.157 pt2 00:11:23.157 08:47:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.157 08:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:23.157 08:47:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.157 08:47:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.157 [2024-10-05 08:47:59.356109] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:23.157 08:47:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.157 08:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:23.157 08:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:23.157 08:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.157 08:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:23.157 08:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.157 08:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.157 08:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.157 08:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.157 08:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.157 08:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.157 08:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.157 08:47:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.157 08:47:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.157 08:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:23.157 08:47:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.157 08:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.157 "name": "raid_bdev1", 00:11:23.157 "uuid": "bc3166e7-1e55-455e-ace4-a86bb9867274", 00:11:23.157 "strip_size_kb": 64, 00:11:23.157 "state": "configuring", 00:11:23.157 "raid_level": "concat", 00:11:23.157 "superblock": true, 00:11:23.157 "num_base_bdevs": 4, 00:11:23.157 "num_base_bdevs_discovered": 1, 00:11:23.157 "num_base_bdevs_operational": 4, 00:11:23.157 "base_bdevs_list": [ 00:11:23.157 { 00:11:23.157 "name": "pt1", 00:11:23.157 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:23.157 "is_configured": true, 00:11:23.157 "data_offset": 2048, 00:11:23.157 "data_size": 63488 00:11:23.157 }, 00:11:23.157 { 00:11:23.157 "name": null, 00:11:23.157 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:23.157 "is_configured": false, 00:11:23.157 "data_offset": 0, 00:11:23.157 "data_size": 63488 00:11:23.157 }, 00:11:23.157 { 00:11:23.157 "name": null, 00:11:23.157 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:23.157 "is_configured": false, 00:11:23.157 "data_offset": 2048, 00:11:23.157 "data_size": 63488 00:11:23.157 }, 00:11:23.157 { 00:11:23.157 "name": null, 00:11:23.157 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:23.157 "is_configured": false, 00:11:23.157 "data_offset": 2048, 00:11:23.157 "data_size": 63488 00:11:23.157 } 00:11:23.157 ] 00:11:23.157 }' 00:11:23.157 08:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.157 08:47:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:23.417 08:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:23.417 08:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:23.417 08:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:23.417 08:47:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.417 08:47:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.417 [2024-10-05 08:47:59.823281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:23.417 [2024-10-05 08:47:59.823370] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:23.417 [2024-10-05 08:47:59.823394] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:23.417 [2024-10-05 08:47:59.823405] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.417 [2024-10-05 08:47:59.823927] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.417 [2024-10-05 08:47:59.823966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:23.417 [2024-10-05 08:47:59.824072] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:23.417 [2024-10-05 08:47:59.824114] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:23.417 pt2 00:11:23.417 08:47:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.417 08:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:23.417 08:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:23.417 08:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:23.417 08:47:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.417 08:47:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.417 [2024-10-05 08:47:59.835214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:23.417 [2024-10-05 08:47:59.835265] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:23.417 [2024-10-05 08:47:59.835291] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:23.417 [2024-10-05 08:47:59.835301] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.417 [2024-10-05 08:47:59.835705] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.417 [2024-10-05 08:47:59.835732] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:23.417 [2024-10-05 08:47:59.835799] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:23.417 [2024-10-05 08:47:59.835818] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:23.417 pt3 00:11:23.417 08:47:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.417 08:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:23.417 08:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:23.417 08:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:23.417 08:47:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.417 08:47:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.417 [2024-10-05 08:47:59.847162] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:23.417 [2024-10-05 08:47:59.847213] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:23.417 [2024-10-05 08:47:59.847231] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:23.417 [2024-10-05 08:47:59.847239] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.417 [2024-10-05 08:47:59.847619] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.417 [2024-10-05 08:47:59.847643] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:23.417 [2024-10-05 08:47:59.847705] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:23.417 [2024-10-05 08:47:59.847728] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:23.417 [2024-10-05 08:47:59.847870] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:23.418 [2024-10-05 08:47:59.847883] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:23.418 [2024-10-05 08:47:59.848154] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:23.418 [2024-10-05 08:47:59.848314] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:23.418 [2024-10-05 08:47:59.848331] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:23.418 [2024-10-05 08:47:59.848465] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:23.418 pt4 00:11:23.418 08:47:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.418 08:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:23.418 08:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:23.418 08:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:23.418 08:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:23.418 08:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:23.418 08:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:23.418 08:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.418 08:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.418 08:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.418 08:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.418 08:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.418 08:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.418 08:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.418 08:47:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.418 08:47:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.418 08:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:23.418 08:47:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.677 08:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.677 "name": "raid_bdev1", 00:11:23.677 "uuid": "bc3166e7-1e55-455e-ace4-a86bb9867274", 00:11:23.677 "strip_size_kb": 64, 00:11:23.677 "state": "online", 00:11:23.677 "raid_level": "concat", 00:11:23.677 
"superblock": true, 00:11:23.677 "num_base_bdevs": 4, 00:11:23.677 "num_base_bdevs_discovered": 4, 00:11:23.677 "num_base_bdevs_operational": 4, 00:11:23.677 "base_bdevs_list": [ 00:11:23.677 { 00:11:23.677 "name": "pt1", 00:11:23.677 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:23.677 "is_configured": true, 00:11:23.677 "data_offset": 2048, 00:11:23.677 "data_size": 63488 00:11:23.677 }, 00:11:23.677 { 00:11:23.677 "name": "pt2", 00:11:23.677 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:23.677 "is_configured": true, 00:11:23.677 "data_offset": 2048, 00:11:23.677 "data_size": 63488 00:11:23.677 }, 00:11:23.677 { 00:11:23.677 "name": "pt3", 00:11:23.677 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:23.677 "is_configured": true, 00:11:23.677 "data_offset": 2048, 00:11:23.677 "data_size": 63488 00:11:23.677 }, 00:11:23.677 { 00:11:23.677 "name": "pt4", 00:11:23.677 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:23.677 "is_configured": true, 00:11:23.677 "data_offset": 2048, 00:11:23.677 "data_size": 63488 00:11:23.677 } 00:11:23.677 ] 00:11:23.677 }' 00:11:23.677 08:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.677 08:47:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.940 08:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:23.940 08:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:23.940 08:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:23.940 08:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:23.940 08:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:23.940 08:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:23.940 08:48:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:23.940 08:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:23.940 08:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.940 08:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.940 [2024-10-05 08:48:00.306725] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:23.940 08:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.940 08:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:23.940 "name": "raid_bdev1", 00:11:23.940 "aliases": [ 00:11:23.940 "bc3166e7-1e55-455e-ace4-a86bb9867274" 00:11:23.940 ], 00:11:23.940 "product_name": "Raid Volume", 00:11:23.940 "block_size": 512, 00:11:23.940 "num_blocks": 253952, 00:11:23.940 "uuid": "bc3166e7-1e55-455e-ace4-a86bb9867274", 00:11:23.940 "assigned_rate_limits": { 00:11:23.940 "rw_ios_per_sec": 0, 00:11:23.940 "rw_mbytes_per_sec": 0, 00:11:23.940 "r_mbytes_per_sec": 0, 00:11:23.940 "w_mbytes_per_sec": 0 00:11:23.940 }, 00:11:23.940 "claimed": false, 00:11:23.940 "zoned": false, 00:11:23.940 "supported_io_types": { 00:11:23.940 "read": true, 00:11:23.940 "write": true, 00:11:23.940 "unmap": true, 00:11:23.940 "flush": true, 00:11:23.940 "reset": true, 00:11:23.940 "nvme_admin": false, 00:11:23.940 "nvme_io": false, 00:11:23.940 "nvme_io_md": false, 00:11:23.940 "write_zeroes": true, 00:11:23.940 "zcopy": false, 00:11:23.940 "get_zone_info": false, 00:11:23.940 "zone_management": false, 00:11:23.940 "zone_append": false, 00:11:23.940 "compare": false, 00:11:23.940 "compare_and_write": false, 00:11:23.940 "abort": false, 00:11:23.940 "seek_hole": false, 00:11:23.940 "seek_data": false, 00:11:23.940 "copy": false, 00:11:23.940 "nvme_iov_md": false 00:11:23.940 }, 00:11:23.940 
"memory_domains": [ 00:11:23.940 { 00:11:23.940 "dma_device_id": "system", 00:11:23.940 "dma_device_type": 1 00:11:23.940 }, 00:11:23.940 { 00:11:23.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.940 "dma_device_type": 2 00:11:23.940 }, 00:11:23.940 { 00:11:23.940 "dma_device_id": "system", 00:11:23.940 "dma_device_type": 1 00:11:23.940 }, 00:11:23.940 { 00:11:23.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.940 "dma_device_type": 2 00:11:23.940 }, 00:11:23.940 { 00:11:23.940 "dma_device_id": "system", 00:11:23.940 "dma_device_type": 1 00:11:23.940 }, 00:11:23.940 { 00:11:23.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.940 "dma_device_type": 2 00:11:23.940 }, 00:11:23.940 { 00:11:23.940 "dma_device_id": "system", 00:11:23.940 "dma_device_type": 1 00:11:23.940 }, 00:11:23.940 { 00:11:23.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.940 "dma_device_type": 2 00:11:23.940 } 00:11:23.940 ], 00:11:23.940 "driver_specific": { 00:11:23.940 "raid": { 00:11:23.940 "uuid": "bc3166e7-1e55-455e-ace4-a86bb9867274", 00:11:23.940 "strip_size_kb": 64, 00:11:23.940 "state": "online", 00:11:23.940 "raid_level": "concat", 00:11:23.940 "superblock": true, 00:11:23.940 "num_base_bdevs": 4, 00:11:23.940 "num_base_bdevs_discovered": 4, 00:11:23.940 "num_base_bdevs_operational": 4, 00:11:23.940 "base_bdevs_list": [ 00:11:23.940 { 00:11:23.940 "name": "pt1", 00:11:23.940 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:23.940 "is_configured": true, 00:11:23.940 "data_offset": 2048, 00:11:23.940 "data_size": 63488 00:11:23.940 }, 00:11:23.940 { 00:11:23.940 "name": "pt2", 00:11:23.940 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:23.940 "is_configured": true, 00:11:23.940 "data_offset": 2048, 00:11:23.940 "data_size": 63488 00:11:23.940 }, 00:11:23.940 { 00:11:23.940 "name": "pt3", 00:11:23.940 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:23.940 "is_configured": true, 00:11:23.940 "data_offset": 2048, 00:11:23.940 "data_size": 63488 
00:11:23.940 }, 00:11:23.940 { 00:11:23.940 "name": "pt4", 00:11:23.940 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:23.940 "is_configured": true, 00:11:23.940 "data_offset": 2048, 00:11:23.940 "data_size": 63488 00:11:23.940 } 00:11:23.940 ] 00:11:23.940 } 00:11:23.940 } 00:11:23.940 }' 00:11:23.940 08:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:23.940 08:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:23.940 pt2 00:11:23.940 pt3 00:11:23.940 pt4' 00:11:23.940 08:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.210 08:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:24.210 08:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.210 08:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:24.210 08:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.210 08:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.210 08:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.211 08:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.211 08:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.211 08:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.211 08:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.211 08:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:24.211 08:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.211 08:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.211 08:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.211 08:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.211 08:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.211 08:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.211 08:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.211 08:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:24.211 08:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.211 08:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.211 08:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.211 08:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.211 08:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.211 08:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.211 08:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.211 08:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:24.211 08:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.211 08:48:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:24.211 08:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.211 08:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.211 08:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.211 08:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.211 08:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:24.211 08:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:24.211 08:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.211 08:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.211 [2024-10-05 08:48:00.634100] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:24.211 08:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.211 08:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' bc3166e7-1e55-455e-ace4-a86bb9867274 '!=' bc3166e7-1e55-455e-ace4-a86bb9867274 ']' 00:11:24.211 08:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:24.211 08:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:24.211 08:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:24.211 08:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 71108 00:11:24.211 08:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 71108 ']' 00:11:24.211 08:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 71108 00:11:24.211 08:48:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:11:24.211 08:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:24.211 08:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71108 00:11:24.471 08:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:24.471 08:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:24.471 killing process with pid 71108 00:11:24.471 08:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71108' 00:11:24.471 08:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 71108 00:11:24.471 [2024-10-05 08:48:00.695060] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:24.471 [2024-10-05 08:48:00.695152] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:24.471 08:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 71108 00:11:24.471 [2024-10-05 08:48:00.695235] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:24.471 [2024-10-05 08:48:00.695255] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:24.731 [2024-10-05 08:48:01.105053] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:26.112 08:48:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:26.112 00:11:26.112 real 0m5.827s 00:11:26.112 user 0m8.076s 00:11:26.112 sys 0m1.111s 00:11:26.112 08:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:26.112 08:48:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.112 ************************************ 00:11:26.112 END TEST raid_superblock_test 
00:11:26.112 ************************************ 00:11:26.112 08:48:02 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:11:26.112 08:48:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:26.112 08:48:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:26.112 08:48:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:26.112 ************************************ 00:11:26.112 START TEST raid_read_error_test 00:11:26.112 ************************************ 00:11:26.112 08:48:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:11:26.112 08:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:26.112 08:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:26.112 08:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:26.112 08:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:26.112 08:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:26.112 08:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:26.112 08:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:26.112 08:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:26.112 08:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:26.112 08:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:26.112 08:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:26.112 08:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:26.112 08:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:11:26.112 08:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:26.112 08:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:26.112 08:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:26.112 08:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:26.112 08:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:26.112 08:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:26.112 08:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:26.112 08:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:26.112 08:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:26.112 08:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:26.112 08:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:26.112 08:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:26.112 08:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:26.112 08:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:26.112 08:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:26.112 08:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Es4PfGBtq5 00:11:26.112 08:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71337 00:11:26.112 08:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z 
-f -L bdev_raid 00:11:26.112 08:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71337 00:11:26.112 08:48:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 71337 ']' 00:11:26.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.112 08:48:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.113 08:48:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:26.113 08:48:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.113 08:48:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:26.113 08:48:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.373 [2024-10-05 08:48:02.602026] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:11:26.373 [2024-10-05 08:48:02.602166] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71337 ] 00:11:26.373 [2024-10-05 08:48:02.764487] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.633 [2024-10-05 08:48:03.011550] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.893 [2024-10-05 08:48:03.253785] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:26.893 [2024-10-05 08:48:03.253934] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:27.154 08:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:27.154 08:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:27.154 08:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:27.154 08:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:27.154 08:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.154 08:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.154 BaseBdev1_malloc 00:11:27.154 08:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.154 08:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:27.154 08:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.154 08:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.154 true 00:11:27.154 08:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:27.154 08:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:27.154 08:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.154 08:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.154 [2024-10-05 08:48:03.502511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:27.154 [2024-10-05 08:48:03.502661] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.154 [2024-10-05 08:48:03.502695] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:27.154 [2024-10-05 08:48:03.502728] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.154 [2024-10-05 08:48:03.505158] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.154 [2024-10-05 08:48:03.505236] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:27.154 BaseBdev1 00:11:27.154 08:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.154 08:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:27.154 08:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:27.154 08:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.154 08:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.154 BaseBdev2_malloc 00:11:27.154 08:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.154 08:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:27.154 08:48:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.154 08:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.154 true 00:11:27.154 08:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.154 08:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:27.154 08:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.154 08:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.154 [2024-10-05 08:48:03.603453] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:27.154 [2024-10-05 08:48:03.603511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.154 [2024-10-05 08:48:03.603527] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:27.154 [2024-10-05 08:48:03.603538] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.154 [2024-10-05 08:48:03.605932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.154 [2024-10-05 08:48:03.606059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:27.154 BaseBdev2 00:11:27.154 08:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.154 08:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:27.154 08:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:27.154 08:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.154 08:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.415 BaseBdev3_malloc 00:11:27.415 08:48:03 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.415 08:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:27.415 08:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.415 08:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.415 true 00:11:27.415 08:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.415 08:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:27.415 08:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.415 08:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.415 [2024-10-05 08:48:03.676513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:27.415 [2024-10-05 08:48:03.676635] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.415 [2024-10-05 08:48:03.676655] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:27.415 [2024-10-05 08:48:03.676666] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.415 [2024-10-05 08:48:03.679035] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.415 [2024-10-05 08:48:03.679068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:27.415 BaseBdev3 00:11:27.415 08:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.415 08:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:27.415 08:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:27.415 08:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.415 08:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.415 BaseBdev4_malloc 00:11:27.415 08:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.415 08:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:27.415 08:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.415 08:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.415 true 00:11:27.415 08:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.415 08:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:27.415 08:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.415 08:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.415 [2024-10-05 08:48:03.747965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:27.415 [2024-10-05 08:48:03.748025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.415 [2024-10-05 08:48:03.748042] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:27.415 [2024-10-05 08:48:03.748055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.415 [2024-10-05 08:48:03.750405] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.415 [2024-10-05 08:48:03.750444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:27.415 BaseBdev4 00:11:27.415 08:48:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.415 08:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:27.415 08:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.415 08:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.415 [2024-10-05 08:48:03.760040] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:27.415 [2024-10-05 08:48:03.762104] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:27.415 [2024-10-05 08:48:03.762179] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:27.415 [2024-10-05 08:48:03.762236] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:27.415 [2024-10-05 08:48:03.762450] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:27.415 [2024-10-05 08:48:03.762464] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:27.415 [2024-10-05 08:48:03.762703] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:27.415 [2024-10-05 08:48:03.762866] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:27.415 [2024-10-05 08:48:03.762875] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:27.415 [2024-10-05 08:48:03.763037] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:27.415 08:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.415 08:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:27.415 08:48:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:27.415 08:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:27.415 08:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:27.415 08:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.415 08:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.415 08:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.415 08:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.415 08:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.415 08:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.415 08:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.415 08:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.415 08:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.415 08:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.415 08:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.415 08:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.415 "name": "raid_bdev1", 00:11:27.415 "uuid": "bdf13a68-adee-4a57-a813-103525d3be61", 00:11:27.415 "strip_size_kb": 64, 00:11:27.415 "state": "online", 00:11:27.415 "raid_level": "concat", 00:11:27.415 "superblock": true, 00:11:27.415 "num_base_bdevs": 4, 00:11:27.415 "num_base_bdevs_discovered": 4, 00:11:27.415 "num_base_bdevs_operational": 4, 00:11:27.415 "base_bdevs_list": [ 
00:11:27.415 { 00:11:27.415 "name": "BaseBdev1", 00:11:27.415 "uuid": "ca8a30b5-e0d4-5ba1-aa23-4611fcd66fcc", 00:11:27.415 "is_configured": true, 00:11:27.415 "data_offset": 2048, 00:11:27.415 "data_size": 63488 00:11:27.415 }, 00:11:27.415 { 00:11:27.415 "name": "BaseBdev2", 00:11:27.415 "uuid": "5e8a9e77-d708-5e6e-926d-d4456bd8b010", 00:11:27.415 "is_configured": true, 00:11:27.415 "data_offset": 2048, 00:11:27.415 "data_size": 63488 00:11:27.415 }, 00:11:27.415 { 00:11:27.415 "name": "BaseBdev3", 00:11:27.415 "uuid": "94995d83-d7b4-5f48-966f-cfda80b5bcda", 00:11:27.415 "is_configured": true, 00:11:27.415 "data_offset": 2048, 00:11:27.415 "data_size": 63488 00:11:27.415 }, 00:11:27.415 { 00:11:27.415 "name": "BaseBdev4", 00:11:27.415 "uuid": "0a9132f9-9590-5374-8b17-64f045410c7d", 00:11:27.415 "is_configured": true, 00:11:27.415 "data_offset": 2048, 00:11:27.415 "data_size": 63488 00:11:27.415 } 00:11:27.415 ] 00:11:27.415 }' 00:11:27.415 08:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.415 08:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.988 08:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:27.988 08:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:27.988 [2024-10-05 08:48:04.272482] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:28.934 08:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:28.934 08:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.934 08:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.934 08:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.934 08:48:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:28.934 08:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:28.934 08:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:28.934 08:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:28.934 08:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:28.934 08:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:28.934 08:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:28.934 08:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.934 08:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.934 08:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.934 08:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.934 08:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.934 08:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.934 08:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.934 08:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.934 08:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.934 08:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.934 08:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.934 08:48:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.934 "name": "raid_bdev1", 00:11:28.934 "uuid": "bdf13a68-adee-4a57-a813-103525d3be61", 00:11:28.934 "strip_size_kb": 64, 00:11:28.934 "state": "online", 00:11:28.934 "raid_level": "concat", 00:11:28.934 "superblock": true, 00:11:28.934 "num_base_bdevs": 4, 00:11:28.934 "num_base_bdevs_discovered": 4, 00:11:28.934 "num_base_bdevs_operational": 4, 00:11:28.934 "base_bdevs_list": [ 00:11:28.934 { 00:11:28.934 "name": "BaseBdev1", 00:11:28.934 "uuid": "ca8a30b5-e0d4-5ba1-aa23-4611fcd66fcc", 00:11:28.934 "is_configured": true, 00:11:28.934 "data_offset": 2048, 00:11:28.934 "data_size": 63488 00:11:28.934 }, 00:11:28.934 { 00:11:28.934 "name": "BaseBdev2", 00:11:28.934 "uuid": "5e8a9e77-d708-5e6e-926d-d4456bd8b010", 00:11:28.934 "is_configured": true, 00:11:28.934 "data_offset": 2048, 00:11:28.934 "data_size": 63488 00:11:28.934 }, 00:11:28.934 { 00:11:28.934 "name": "BaseBdev3", 00:11:28.934 "uuid": "94995d83-d7b4-5f48-966f-cfda80b5bcda", 00:11:28.934 "is_configured": true, 00:11:28.934 "data_offset": 2048, 00:11:28.934 "data_size": 63488 00:11:28.934 }, 00:11:28.934 { 00:11:28.934 "name": "BaseBdev4", 00:11:28.934 "uuid": "0a9132f9-9590-5374-8b17-64f045410c7d", 00:11:28.934 "is_configured": true, 00:11:28.934 "data_offset": 2048, 00:11:28.934 "data_size": 63488 00:11:28.934 } 00:11:28.934 ] 00:11:28.934 }' 00:11:28.934 08:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.934 08:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.195 08:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:29.195 08:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.195 08:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.195 [2024-10-05 08:48:05.568409] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:29.195 [2024-10-05 08:48:05.568533] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:29.195 [2024-10-05 08:48:05.571134] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:29.195 [2024-10-05 08:48:05.571254] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:29.195 [2024-10-05 08:48:05.571323] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:29.195 [2024-10-05 08:48:05.571375] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:29.195 08:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.195 { 00:11:29.195 "results": [ 00:11:29.195 { 00:11:29.195 "job": "raid_bdev1", 00:11:29.195 "core_mask": "0x1", 00:11:29.195 "workload": "randrw", 00:11:29.195 "percentage": 50, 00:11:29.195 "status": "finished", 00:11:29.195 "queue_depth": 1, 00:11:29.195 "io_size": 131072, 00:11:29.195 "runtime": 1.296452, 00:11:29.195 "iops": 14027.515095044013, 00:11:29.195 "mibps": 1753.4393868805016, 00:11:29.195 "io_failed": 1, 00:11:29.195 "io_timeout": 0, 00:11:29.195 "avg_latency_us": 100.57888366444385, 00:11:29.195 "min_latency_us": 24.482096069868994, 00:11:29.195 "max_latency_us": 1395.1441048034935 00:11:29.195 } 00:11:29.195 ], 00:11:29.195 "core_count": 1 00:11:29.195 } 00:11:29.195 08:48:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71337 00:11:29.195 08:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 71337 ']' 00:11:29.195 08:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 71337 00:11:29.195 08:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:11:29.195 08:48:05 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:29.195 08:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71337 00:11:29.195 08:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:29.195 08:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:29.195 08:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71337' 00:11:29.195 killing process with pid 71337 00:11:29.195 08:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 71337 00:11:29.195 [2024-10-05 08:48:05.605446] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:29.195 08:48:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 71337 00:11:29.764 [2024-10-05 08:48:05.953316] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:31.139 08:48:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Es4PfGBtq5 00:11:31.139 08:48:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:31.139 08:48:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:31.139 08:48:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.77 00:11:31.139 08:48:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:31.139 ************************************ 00:11:31.139 END TEST raid_read_error_test 00:11:31.139 ************************************ 00:11:31.139 08:48:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:31.139 08:48:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:31.139 08:48:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.77 != \0\.\0\0 ]] 
00:11:31.139 00:11:31.139 real 0m4.883s 00:11:31.139 user 0m5.476s 00:11:31.139 sys 0m0.704s 00:11:31.139 08:48:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:31.139 08:48:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.139 08:48:07 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:11:31.139 08:48:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:31.139 08:48:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:31.139 08:48:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:31.139 ************************************ 00:11:31.139 START TEST raid_write_error_test 00:11:31.139 ************************************ 00:11:31.139 08:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:11:31.139 08:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:31.139 08:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:31.139 08:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:31.139 08:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:31.139 08:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:31.139 08:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:31.139 08:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:31.139 08:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:31.139 08:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:31.139 08:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:31.139 08:48:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:31.139 08:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:31.139 08:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:31.139 08:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:31.139 08:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:31.140 08:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:31.140 08:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:31.140 08:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:31.140 08:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:31.140 08:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:31.140 08:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:31.140 08:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:31.140 08:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:31.140 08:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:31.140 08:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:31.140 08:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:31.140 08:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:31.140 08:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:31.140 08:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # 
bdevperf_log=/raidtest/tmp.TLK0IyqOBN 00:11:31.140 08:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71458 00:11:31.140 08:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:31.140 08:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71458 00:11:31.140 08:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 71458 ']' 00:11:31.140 08:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.140 08:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:31.140 08:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.140 08:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:31.140 08:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.140 [2024-10-05 08:48:07.562313] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:11:31.140 [2024-10-05 08:48:07.562438] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71458 ] 00:11:31.398 [2024-10-05 08:48:07.733070] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.657 [2024-10-05 08:48:07.983872] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.916 [2024-10-05 08:48:08.219626] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:31.916 [2024-10-05 08:48:08.219664] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:32.174 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:32.174 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:32.174 08:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:32.174 08:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:32.174 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.174 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.174 BaseBdev1_malloc 00:11:32.174 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.174 08:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:32.174 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.174 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.174 true 00:11:32.174 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:32.174 08:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:32.174 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.174 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.174 [2024-10-05 08:48:08.456997] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:32.174 [2024-10-05 08:48:08.457118] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:32.174 [2024-10-05 08:48:08.457142] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:32.174 [2024-10-05 08:48:08.457154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:32.174 [2024-10-05 08:48:08.459522] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:32.174 [2024-10-05 08:48:08.459562] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:32.174 BaseBdev1 00:11:32.174 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.174 08:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:32.174 08:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:32.174 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.174 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.174 BaseBdev2_malloc 00:11:32.174 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.174 08:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:32.174 08:48:08 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.174 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.174 true 00:11:32.174 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.174 08:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:32.174 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.174 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.174 [2024-10-05 08:48:08.545672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:32.174 [2024-10-05 08:48:08.545789] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:32.174 [2024-10-05 08:48:08.545812] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:32.174 [2024-10-05 08:48:08.545823] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:32.174 [2024-10-05 08:48:08.548291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:32.174 [2024-10-05 08:48:08.548329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:32.174 BaseBdev2 00:11:32.174 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.174 08:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:32.174 08:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:32.174 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.174 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:32.174 BaseBdev3_malloc 00:11:32.174 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.174 08:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:32.174 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.174 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.174 true 00:11:32.174 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.174 08:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:32.175 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.175 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.175 [2024-10-05 08:48:08.619267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:32.175 [2024-10-05 08:48:08.619322] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:32.175 [2024-10-05 08:48:08.619338] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:32.175 [2024-10-05 08:48:08.619349] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:32.175 [2024-10-05 08:48:08.621762] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:32.175 [2024-10-05 08:48:08.621801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:32.175 BaseBdev3 00:11:32.175 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.175 08:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:32.175 08:48:08 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:32.175 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.175 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.434 BaseBdev4_malloc 00:11:32.434 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.434 08:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:32.434 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.434 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.434 true 00:11:32.434 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.434 08:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:32.434 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.434 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.434 [2024-10-05 08:48:08.690872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:32.434 [2024-10-05 08:48:08.690927] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:32.434 [2024-10-05 08:48:08.690944] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:32.434 [2024-10-05 08:48:08.690966] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:32.434 [2024-10-05 08:48:08.693361] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:32.434 [2024-10-05 08:48:08.693399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:32.434 BaseBdev4 
00:11:32.434 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.434 08:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:32.434 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.434 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.434 [2024-10-05 08:48:08.702937] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:32.434 [2024-10-05 08:48:08.705028] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:32.434 [2024-10-05 08:48:08.705102] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:32.434 [2024-10-05 08:48:08.705160] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:32.434 [2024-10-05 08:48:08.705390] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:32.434 [2024-10-05 08:48:08.705411] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:32.434 [2024-10-05 08:48:08.705653] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:32.434 [2024-10-05 08:48:08.705812] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:32.434 [2024-10-05 08:48:08.705821] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:32.434 [2024-10-05 08:48:08.706003] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:32.434 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.434 08:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:11:32.434 08:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:32.434 08:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:32.434 08:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:32.434 08:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:32.434 08:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.434 08:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.434 08:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.434 08:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.434 08:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.434 08:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.434 08:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.434 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.434 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.434 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.434 08:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.434 "name": "raid_bdev1", 00:11:32.434 "uuid": "af9f7ebe-bc7b-4a19-a0a9-03e1e3b8e250", 00:11:32.434 "strip_size_kb": 64, 00:11:32.434 "state": "online", 00:11:32.434 "raid_level": "concat", 00:11:32.434 "superblock": true, 00:11:32.434 "num_base_bdevs": 4, 00:11:32.434 "num_base_bdevs_discovered": 4, 00:11:32.434 
"num_base_bdevs_operational": 4, 00:11:32.434 "base_bdevs_list": [ 00:11:32.434 { 00:11:32.434 "name": "BaseBdev1", 00:11:32.434 "uuid": "0568a6c4-61ab-5a25-9d51-5b97539f876b", 00:11:32.434 "is_configured": true, 00:11:32.434 "data_offset": 2048, 00:11:32.434 "data_size": 63488 00:11:32.434 }, 00:11:32.434 { 00:11:32.434 "name": "BaseBdev2", 00:11:32.434 "uuid": "f51f201c-26fd-5814-b0b0-ec2edb6d3c9c", 00:11:32.434 "is_configured": true, 00:11:32.434 "data_offset": 2048, 00:11:32.434 "data_size": 63488 00:11:32.434 }, 00:11:32.434 { 00:11:32.434 "name": "BaseBdev3", 00:11:32.434 "uuid": "6aae9081-c3ee-54a3-82af-c8cb165708c9", 00:11:32.434 "is_configured": true, 00:11:32.434 "data_offset": 2048, 00:11:32.434 "data_size": 63488 00:11:32.434 }, 00:11:32.434 { 00:11:32.434 "name": "BaseBdev4", 00:11:32.434 "uuid": "9f954836-2d2a-5342-9d6a-a0dec2a6bd72", 00:11:32.434 "is_configured": true, 00:11:32.434 "data_offset": 2048, 00:11:32.434 "data_size": 63488 00:11:32.434 } 00:11:32.434 ] 00:11:32.434 }' 00:11:32.434 08:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.434 08:48:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.694 08:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:32.694 08:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:32.975 [2024-10-05 08:48:09.235512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:33.914 08:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:33.915 08:48:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.915 08:48:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.915 08:48:10 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.915 08:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:33.915 08:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:33.915 08:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:33.915 08:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:33.915 08:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:33.915 08:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:33.915 08:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:33.915 08:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:33.915 08:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.915 08:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.915 08:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.915 08:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.915 08:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.915 08:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.915 08:48:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.915 08:48:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.915 08:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:33.915 08:48:10 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.915 08:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.915 "name": "raid_bdev1", 00:11:33.915 "uuid": "af9f7ebe-bc7b-4a19-a0a9-03e1e3b8e250", 00:11:33.915 "strip_size_kb": 64, 00:11:33.915 "state": "online", 00:11:33.915 "raid_level": "concat", 00:11:33.915 "superblock": true, 00:11:33.915 "num_base_bdevs": 4, 00:11:33.915 "num_base_bdevs_discovered": 4, 00:11:33.915 "num_base_bdevs_operational": 4, 00:11:33.915 "base_bdevs_list": [ 00:11:33.915 { 00:11:33.915 "name": "BaseBdev1", 00:11:33.915 "uuid": "0568a6c4-61ab-5a25-9d51-5b97539f876b", 00:11:33.915 "is_configured": true, 00:11:33.915 "data_offset": 2048, 00:11:33.915 "data_size": 63488 00:11:33.915 }, 00:11:33.915 { 00:11:33.915 "name": "BaseBdev2", 00:11:33.915 "uuid": "f51f201c-26fd-5814-b0b0-ec2edb6d3c9c", 00:11:33.915 "is_configured": true, 00:11:33.915 "data_offset": 2048, 00:11:33.915 "data_size": 63488 00:11:33.915 }, 00:11:33.915 { 00:11:33.915 "name": "BaseBdev3", 00:11:33.915 "uuid": "6aae9081-c3ee-54a3-82af-c8cb165708c9", 00:11:33.915 "is_configured": true, 00:11:33.915 "data_offset": 2048, 00:11:33.915 "data_size": 63488 00:11:33.915 }, 00:11:33.915 { 00:11:33.915 "name": "BaseBdev4", 00:11:33.915 "uuid": "9f954836-2d2a-5342-9d6a-a0dec2a6bd72", 00:11:33.915 "is_configured": true, 00:11:33.915 "data_offset": 2048, 00:11:33.915 "data_size": 63488 00:11:33.915 } 00:11:33.915 ] 00:11:33.915 }' 00:11:33.915 08:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.915 08:48:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.175 08:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:34.175 08:48:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.175 08:48:10 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:34.175 [2024-10-05 08:48:10.616422] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:34.175 [2024-10-05 08:48:10.616468] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:34.175 [2024-10-05 08:48:10.619124] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:34.175 [2024-10-05 08:48:10.619188] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:34.175 [2024-10-05 08:48:10.619237] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:34.175 [2024-10-05 08:48:10.619250] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:34.175 { 00:11:34.175 "results": [ 00:11:34.175 { 00:11:34.175 "job": "raid_bdev1", 00:11:34.175 "core_mask": "0x1", 00:11:34.175 "workload": "randrw", 00:11:34.175 "percentage": 50, 00:11:34.175 "status": "finished", 00:11:34.175 "queue_depth": 1, 00:11:34.175 "io_size": 131072, 00:11:34.175 "runtime": 1.381404, 00:11:34.175 "iops": 13899.626756546239, 00:11:34.175 "mibps": 1737.4533445682798, 00:11:34.175 "io_failed": 1, 00:11:34.175 "io_timeout": 0, 00:11:34.175 "avg_latency_us": 101.45507659546017, 00:11:34.175 "min_latency_us": 24.817467248908297, 00:11:34.175 "max_latency_us": 1430.9170305676855 00:11:34.175 } 00:11:34.175 ], 00:11:34.175 "core_count": 1 00:11:34.175 } 00:11:34.175 08:48:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.175 08:48:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71458 00:11:34.175 08:48:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 71458 ']' 00:11:34.175 08:48:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 71458 00:11:34.175 08:48:10 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@955 -- # uname 00:11:34.175 08:48:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:34.175 08:48:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71458 00:11:34.436 killing process with pid 71458 00:11:34.436 08:48:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:34.436 08:48:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:34.436 08:48:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71458' 00:11:34.436 08:48:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 71458 00:11:34.436 [2024-10-05 08:48:10.660425] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:34.436 08:48:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 71458 00:11:34.696 [2024-10-05 08:48:11.008894] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:36.079 08:48:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.TLK0IyqOBN 00:11:36.079 08:48:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:36.079 08:48:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:36.079 08:48:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:36.079 08:48:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:36.079 08:48:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:36.079 08:48:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:36.079 08:48:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:36.079 ************************************ 00:11:36.079 END TEST 
raid_write_error_test 00:11:36.079 ************************************ 00:11:36.079 00:11:36.079 real 0m4.974s 00:11:36.079 user 0m5.645s 00:11:36.079 sys 0m0.755s 00:11:36.079 08:48:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:36.079 08:48:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.079 08:48:12 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:36.079 08:48:12 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:11:36.079 08:48:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:36.079 08:48:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:36.079 08:48:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:36.079 ************************************ 00:11:36.079 START TEST raid_state_function_test 00:11:36.079 ************************************ 00:11:36.079 08:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:11:36.079 08:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:36.079 08:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:36.079 08:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:36.079 08:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:36.079 08:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:36.079 08:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:36.079 08:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:36.079 08:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:36.079 08:48:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:36.079 08:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:36.080 08:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:36.080 08:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:36.080 08:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:36.080 08:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:36.080 08:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:36.080 08:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:36.080 08:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:36.080 08:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:36.080 08:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:36.080 08:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:36.080 08:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:36.080 08:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:36.080 08:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:36.080 08:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:36.080 08:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:36.080 08:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:36.080 08:48:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:36.080 08:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:36.080 08:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71572 00:11:36.080 08:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:36.080 08:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71572' 00:11:36.080 Process raid pid: 71572 00:11:36.080 08:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71572 00:11:36.080 08:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 71572 ']' 00:11:36.080 08:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.080 08:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:36.080 08:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.080 08:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:36.080 08:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.340 [2024-10-05 08:48:12.606343] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:11:36.340 [2024-10-05 08:48:12.607027] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:36.340 [2024-10-05 08:48:12.778371] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.599 [2024-10-05 08:48:13.032457] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.859 [2024-10-05 08:48:13.266480] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:36.859 [2024-10-05 08:48:13.266621] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:37.119 08:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:37.119 08:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:11:37.119 08:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:37.120 08:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.120 08:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.120 [2024-10-05 08:48:13.435684] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:37.120 [2024-10-05 08:48:13.435752] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:37.120 [2024-10-05 08:48:13.435762] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:37.120 [2024-10-05 08:48:13.435773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:37.120 [2024-10-05 08:48:13.435779] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:37.120 [2024-10-05 08:48:13.435788] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:37.120 [2024-10-05 08:48:13.435794] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:37.120 [2024-10-05 08:48:13.435803] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:37.120 08:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.120 08:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:37.120 08:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.120 08:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.120 08:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.120 08:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.120 08:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.120 08:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.120 08:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.120 08:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.120 08:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.120 08:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.120 08:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.120 08:48:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.120 08:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.120 08:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.120 08:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.120 "name": "Existed_Raid", 00:11:37.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.120 "strip_size_kb": 0, 00:11:37.120 "state": "configuring", 00:11:37.120 "raid_level": "raid1", 00:11:37.120 "superblock": false, 00:11:37.120 "num_base_bdevs": 4, 00:11:37.120 "num_base_bdevs_discovered": 0, 00:11:37.120 "num_base_bdevs_operational": 4, 00:11:37.120 "base_bdevs_list": [ 00:11:37.120 { 00:11:37.120 "name": "BaseBdev1", 00:11:37.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.120 "is_configured": false, 00:11:37.120 "data_offset": 0, 00:11:37.120 "data_size": 0 00:11:37.120 }, 00:11:37.120 { 00:11:37.120 "name": "BaseBdev2", 00:11:37.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.120 "is_configured": false, 00:11:37.120 "data_offset": 0, 00:11:37.120 "data_size": 0 00:11:37.120 }, 00:11:37.120 { 00:11:37.120 "name": "BaseBdev3", 00:11:37.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.120 "is_configured": false, 00:11:37.120 "data_offset": 0, 00:11:37.120 "data_size": 0 00:11:37.120 }, 00:11:37.120 { 00:11:37.120 "name": "BaseBdev4", 00:11:37.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.120 "is_configured": false, 00:11:37.120 "data_offset": 0, 00:11:37.120 "data_size": 0 00:11:37.120 } 00:11:37.120 ] 00:11:37.120 }' 00:11:37.120 08:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.120 08:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.380 08:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:37.380 08:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.380 08:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.380 [2024-10-05 08:48:13.802950] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:37.380 [2024-10-05 08:48:13.803064] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:37.380 08:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.380 08:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:37.380 08:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.380 08:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.380 [2024-10-05 08:48:13.814966] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:37.380 [2024-10-05 08:48:13.815047] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:37.380 [2024-10-05 08:48:13.815075] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:37.380 [2024-10-05 08:48:13.815098] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:37.380 [2024-10-05 08:48:13.815116] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:37.380 [2024-10-05 08:48:13.815137] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:37.380 [2024-10-05 08:48:13.815155] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:37.380 [2024-10-05 08:48:13.815176] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:37.380 08:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.380 08:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:37.380 08:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.380 08:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.641 [2024-10-05 08:48:13.878579] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:37.641 BaseBdev1 00:11:37.641 08:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.641 08:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:37.641 08:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:37.641 08:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:37.641 08:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:37.641 08:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:37.641 08:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:37.641 08:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:37.641 08:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.641 08:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.641 08:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.641 08:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:37.641 08:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.641 08:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.641 [ 00:11:37.641 { 00:11:37.641 "name": "BaseBdev1", 00:11:37.641 "aliases": [ 00:11:37.641 "abe3ab9d-5445-4ce3-8478-ab6f83feda86" 00:11:37.641 ], 00:11:37.641 "product_name": "Malloc disk", 00:11:37.641 "block_size": 512, 00:11:37.641 "num_blocks": 65536, 00:11:37.641 "uuid": "abe3ab9d-5445-4ce3-8478-ab6f83feda86", 00:11:37.641 "assigned_rate_limits": { 00:11:37.641 "rw_ios_per_sec": 0, 00:11:37.641 "rw_mbytes_per_sec": 0, 00:11:37.641 "r_mbytes_per_sec": 0, 00:11:37.641 "w_mbytes_per_sec": 0 00:11:37.641 }, 00:11:37.641 "claimed": true, 00:11:37.641 "claim_type": "exclusive_write", 00:11:37.641 "zoned": false, 00:11:37.641 "supported_io_types": { 00:11:37.641 "read": true, 00:11:37.641 "write": true, 00:11:37.641 "unmap": true, 00:11:37.641 "flush": true, 00:11:37.641 "reset": true, 00:11:37.641 "nvme_admin": false, 00:11:37.641 "nvme_io": false, 00:11:37.641 "nvme_io_md": false, 00:11:37.641 "write_zeroes": true, 00:11:37.641 "zcopy": true, 00:11:37.641 "get_zone_info": false, 00:11:37.641 "zone_management": false, 00:11:37.641 "zone_append": false, 00:11:37.641 "compare": false, 00:11:37.641 "compare_and_write": false, 00:11:37.641 "abort": true, 00:11:37.641 "seek_hole": false, 00:11:37.641 "seek_data": false, 00:11:37.641 "copy": true, 00:11:37.641 "nvme_iov_md": false 00:11:37.641 }, 00:11:37.641 "memory_domains": [ 00:11:37.641 { 00:11:37.641 "dma_device_id": "system", 00:11:37.641 "dma_device_type": 1 00:11:37.641 }, 00:11:37.641 { 00:11:37.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.641 "dma_device_type": 2 00:11:37.641 } 00:11:37.641 ], 00:11:37.641 "driver_specific": {} 00:11:37.641 } 00:11:37.641 ] 00:11:37.641 08:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:37.641 08:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:37.641 08:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:37.641 08:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.641 08:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.641 08:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.641 08:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.641 08:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.641 08:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.641 08:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.641 08:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.641 08:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.641 08:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.641 08:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.641 08:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.641 08:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.641 08:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.641 08:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.641 "name": "Existed_Raid", 
00:11:37.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.641 "strip_size_kb": 0, 00:11:37.641 "state": "configuring", 00:11:37.641 "raid_level": "raid1", 00:11:37.641 "superblock": false, 00:11:37.641 "num_base_bdevs": 4, 00:11:37.641 "num_base_bdevs_discovered": 1, 00:11:37.641 "num_base_bdevs_operational": 4, 00:11:37.641 "base_bdevs_list": [ 00:11:37.641 { 00:11:37.641 "name": "BaseBdev1", 00:11:37.641 "uuid": "abe3ab9d-5445-4ce3-8478-ab6f83feda86", 00:11:37.641 "is_configured": true, 00:11:37.641 "data_offset": 0, 00:11:37.641 "data_size": 65536 00:11:37.641 }, 00:11:37.641 { 00:11:37.641 "name": "BaseBdev2", 00:11:37.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.641 "is_configured": false, 00:11:37.641 "data_offset": 0, 00:11:37.641 "data_size": 0 00:11:37.641 }, 00:11:37.641 { 00:11:37.641 "name": "BaseBdev3", 00:11:37.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.641 "is_configured": false, 00:11:37.641 "data_offset": 0, 00:11:37.641 "data_size": 0 00:11:37.641 }, 00:11:37.641 { 00:11:37.642 "name": "BaseBdev4", 00:11:37.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.642 "is_configured": false, 00:11:37.642 "data_offset": 0, 00:11:37.642 "data_size": 0 00:11:37.642 } 00:11:37.642 ] 00:11:37.642 }' 00:11:37.642 08:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.642 08:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.902 08:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:37.902 08:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.902 08:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.902 [2024-10-05 08:48:14.261974] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:37.902 [2024-10-05 08:48:14.262092] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:37.902 08:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.902 08:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:37.902 08:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.902 08:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.902 [2024-10-05 08:48:14.273999] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:37.902 [2024-10-05 08:48:14.276101] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:37.902 [2024-10-05 08:48:14.276146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:37.902 [2024-10-05 08:48:14.276157] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:37.902 [2024-10-05 08:48:14.276168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:37.902 [2024-10-05 08:48:14.276175] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:37.902 [2024-10-05 08:48:14.276183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:37.902 08:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.902 08:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:37.902 08:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:37.902 08:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:37.902 
08:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.902 08:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.902 08:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.902 08:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.902 08:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.902 08:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.902 08:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.902 08:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.902 08:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.902 08:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.902 08:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.902 08:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.902 08:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.902 08:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.902 08:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.902 "name": "Existed_Raid", 00:11:37.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.902 "strip_size_kb": 0, 00:11:37.902 "state": "configuring", 00:11:37.902 "raid_level": "raid1", 00:11:37.902 "superblock": false, 00:11:37.902 "num_base_bdevs": 4, 00:11:37.902 "num_base_bdevs_discovered": 1, 
00:11:37.902 "num_base_bdevs_operational": 4, 00:11:37.902 "base_bdevs_list": [ 00:11:37.902 { 00:11:37.902 "name": "BaseBdev1", 00:11:37.902 "uuid": "abe3ab9d-5445-4ce3-8478-ab6f83feda86", 00:11:37.902 "is_configured": true, 00:11:37.902 "data_offset": 0, 00:11:37.902 "data_size": 65536 00:11:37.902 }, 00:11:37.902 { 00:11:37.902 "name": "BaseBdev2", 00:11:37.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.902 "is_configured": false, 00:11:37.902 "data_offset": 0, 00:11:37.902 "data_size": 0 00:11:37.902 }, 00:11:37.902 { 00:11:37.902 "name": "BaseBdev3", 00:11:37.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.902 "is_configured": false, 00:11:37.902 "data_offset": 0, 00:11:37.902 "data_size": 0 00:11:37.902 }, 00:11:37.902 { 00:11:37.902 "name": "BaseBdev4", 00:11:37.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.903 "is_configured": false, 00:11:37.903 "data_offset": 0, 00:11:37.903 "data_size": 0 00:11:37.903 } 00:11:37.903 ] 00:11:37.903 }' 00:11:37.903 08:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.903 08:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.474 08:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:38.474 08:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.474 08:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.474 [2024-10-05 08:48:14.733182] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:38.474 BaseBdev2 00:11:38.474 08:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.474 08:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:38.474 08:48:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:38.474 08:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:38.474 08:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:38.474 08:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:38.474 08:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:38.474 08:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:38.474 08:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.474 08:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.474 08:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.474 08:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:38.474 08:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.474 08:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.474 [ 00:11:38.474 { 00:11:38.474 "name": "BaseBdev2", 00:11:38.474 "aliases": [ 00:11:38.474 "829f5671-b45c-434d-898a-f7af520649e0" 00:11:38.474 ], 00:11:38.474 "product_name": "Malloc disk", 00:11:38.474 "block_size": 512, 00:11:38.474 "num_blocks": 65536, 00:11:38.474 "uuid": "829f5671-b45c-434d-898a-f7af520649e0", 00:11:38.474 "assigned_rate_limits": { 00:11:38.474 "rw_ios_per_sec": 0, 00:11:38.474 "rw_mbytes_per_sec": 0, 00:11:38.474 "r_mbytes_per_sec": 0, 00:11:38.474 "w_mbytes_per_sec": 0 00:11:38.474 }, 00:11:38.474 "claimed": true, 00:11:38.474 "claim_type": "exclusive_write", 00:11:38.474 "zoned": false, 00:11:38.474 "supported_io_types": { 00:11:38.474 "read": true, 
00:11:38.474 "write": true, 00:11:38.474 "unmap": true, 00:11:38.474 "flush": true, 00:11:38.474 "reset": true, 00:11:38.474 "nvme_admin": false, 00:11:38.474 "nvme_io": false, 00:11:38.474 "nvme_io_md": false, 00:11:38.474 "write_zeroes": true, 00:11:38.474 "zcopy": true, 00:11:38.474 "get_zone_info": false, 00:11:38.474 "zone_management": false, 00:11:38.474 "zone_append": false, 00:11:38.474 "compare": false, 00:11:38.474 "compare_and_write": false, 00:11:38.474 "abort": true, 00:11:38.474 "seek_hole": false, 00:11:38.474 "seek_data": false, 00:11:38.474 "copy": true, 00:11:38.474 "nvme_iov_md": false 00:11:38.474 }, 00:11:38.474 "memory_domains": [ 00:11:38.474 { 00:11:38.474 "dma_device_id": "system", 00:11:38.474 "dma_device_type": 1 00:11:38.474 }, 00:11:38.474 { 00:11:38.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.474 "dma_device_type": 2 00:11:38.474 } 00:11:38.474 ], 00:11:38.474 "driver_specific": {} 00:11:38.474 } 00:11:38.474 ] 00:11:38.474 08:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.474 08:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:38.474 08:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:38.474 08:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:38.474 08:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:38.474 08:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.474 08:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.474 08:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.474 08:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:38.474 08:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.474 08:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.474 08:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.474 08:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.474 08:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.474 08:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.475 08:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.475 08:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.475 08:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.475 08:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.475 08:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.475 "name": "Existed_Raid", 00:11:38.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.475 "strip_size_kb": 0, 00:11:38.475 "state": "configuring", 00:11:38.475 "raid_level": "raid1", 00:11:38.475 "superblock": false, 00:11:38.475 "num_base_bdevs": 4, 00:11:38.475 "num_base_bdevs_discovered": 2, 00:11:38.475 "num_base_bdevs_operational": 4, 00:11:38.475 "base_bdevs_list": [ 00:11:38.475 { 00:11:38.475 "name": "BaseBdev1", 00:11:38.475 "uuid": "abe3ab9d-5445-4ce3-8478-ab6f83feda86", 00:11:38.475 "is_configured": true, 00:11:38.475 "data_offset": 0, 00:11:38.475 "data_size": 65536 00:11:38.475 }, 00:11:38.475 { 00:11:38.475 "name": "BaseBdev2", 00:11:38.475 "uuid": "829f5671-b45c-434d-898a-f7af520649e0", 00:11:38.475 "is_configured": true, 
00:11:38.475 "data_offset": 0, 00:11:38.475 "data_size": 65536 00:11:38.475 }, 00:11:38.475 { 00:11:38.475 "name": "BaseBdev3", 00:11:38.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.475 "is_configured": false, 00:11:38.475 "data_offset": 0, 00:11:38.475 "data_size": 0 00:11:38.475 }, 00:11:38.475 { 00:11:38.475 "name": "BaseBdev4", 00:11:38.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.475 "is_configured": false, 00:11:38.475 "data_offset": 0, 00:11:38.475 "data_size": 0 00:11:38.475 } 00:11:38.475 ] 00:11:38.475 }' 00:11:38.475 08:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.475 08:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.045 08:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:39.045 08:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.045 08:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.045 [2024-10-05 08:48:15.254691] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:39.045 BaseBdev3 00:11:39.045 08:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.045 08:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:39.045 08:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:39.045 08:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:39.045 08:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:39.045 08:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:39.045 08:48:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:39.045 08:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:39.045 08:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.045 08:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.045 08:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.045 08:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:39.045 08:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.045 08:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.045 [ 00:11:39.045 { 00:11:39.045 "name": "BaseBdev3", 00:11:39.046 "aliases": [ 00:11:39.046 "df90e545-73c7-4a67-992b-4f1cdec86a59" 00:11:39.046 ], 00:11:39.046 "product_name": "Malloc disk", 00:11:39.046 "block_size": 512, 00:11:39.046 "num_blocks": 65536, 00:11:39.046 "uuid": "df90e545-73c7-4a67-992b-4f1cdec86a59", 00:11:39.046 "assigned_rate_limits": { 00:11:39.046 "rw_ios_per_sec": 0, 00:11:39.046 "rw_mbytes_per_sec": 0, 00:11:39.046 "r_mbytes_per_sec": 0, 00:11:39.046 "w_mbytes_per_sec": 0 00:11:39.046 }, 00:11:39.046 "claimed": true, 00:11:39.046 "claim_type": "exclusive_write", 00:11:39.046 "zoned": false, 00:11:39.046 "supported_io_types": { 00:11:39.046 "read": true, 00:11:39.046 "write": true, 00:11:39.046 "unmap": true, 00:11:39.046 "flush": true, 00:11:39.046 "reset": true, 00:11:39.046 "nvme_admin": false, 00:11:39.046 "nvme_io": false, 00:11:39.046 "nvme_io_md": false, 00:11:39.046 "write_zeroes": true, 00:11:39.046 "zcopy": true, 00:11:39.046 "get_zone_info": false, 00:11:39.046 "zone_management": false, 00:11:39.046 "zone_append": false, 00:11:39.046 "compare": false, 00:11:39.046 "compare_and_write": false, 
00:11:39.046 "abort": true, 00:11:39.046 "seek_hole": false, 00:11:39.046 "seek_data": false, 00:11:39.046 "copy": true, 00:11:39.046 "nvme_iov_md": false 00:11:39.046 }, 00:11:39.046 "memory_domains": [ 00:11:39.046 { 00:11:39.046 "dma_device_id": "system", 00:11:39.046 "dma_device_type": 1 00:11:39.046 }, 00:11:39.046 { 00:11:39.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.046 "dma_device_type": 2 00:11:39.046 } 00:11:39.046 ], 00:11:39.046 "driver_specific": {} 00:11:39.046 } 00:11:39.046 ] 00:11:39.046 08:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.046 08:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:39.046 08:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:39.046 08:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:39.046 08:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:39.046 08:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.046 08:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.046 08:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.046 08:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.046 08:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.046 08:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.046 08:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.046 08:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:39.046 08:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.046 08:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.046 08:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.046 08:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.046 08:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.046 08:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.046 08:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.046 "name": "Existed_Raid", 00:11:39.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.046 "strip_size_kb": 0, 00:11:39.046 "state": "configuring", 00:11:39.046 "raid_level": "raid1", 00:11:39.046 "superblock": false, 00:11:39.046 "num_base_bdevs": 4, 00:11:39.046 "num_base_bdevs_discovered": 3, 00:11:39.046 "num_base_bdevs_operational": 4, 00:11:39.046 "base_bdevs_list": [ 00:11:39.046 { 00:11:39.046 "name": "BaseBdev1", 00:11:39.046 "uuid": "abe3ab9d-5445-4ce3-8478-ab6f83feda86", 00:11:39.046 "is_configured": true, 00:11:39.046 "data_offset": 0, 00:11:39.046 "data_size": 65536 00:11:39.046 }, 00:11:39.046 { 00:11:39.046 "name": "BaseBdev2", 00:11:39.046 "uuid": "829f5671-b45c-434d-898a-f7af520649e0", 00:11:39.046 "is_configured": true, 00:11:39.046 "data_offset": 0, 00:11:39.046 "data_size": 65536 00:11:39.046 }, 00:11:39.046 { 00:11:39.046 "name": "BaseBdev3", 00:11:39.046 "uuid": "df90e545-73c7-4a67-992b-4f1cdec86a59", 00:11:39.046 "is_configured": true, 00:11:39.046 "data_offset": 0, 00:11:39.046 "data_size": 65536 00:11:39.046 }, 00:11:39.046 { 00:11:39.046 "name": "BaseBdev4", 00:11:39.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.046 "is_configured": false, 
00:11:39.046 "data_offset": 0, 00:11:39.046 "data_size": 0 00:11:39.046 } 00:11:39.046 ] 00:11:39.046 }' 00:11:39.046 08:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.046 08:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.306 08:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:39.306 08:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.306 08:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.306 [2024-10-05 08:48:15.766174] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:39.306 [2024-10-05 08:48:15.766337] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:39.306 [2024-10-05 08:48:15.766368] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:39.306 [2024-10-05 08:48:15.766740] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:39.306 [2024-10-05 08:48:15.767003] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:39.306 [2024-10-05 08:48:15.767056] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:39.306 [2024-10-05 08:48:15.767395] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:39.306 BaseBdev4 00:11:39.306 08:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.306 08:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:39.306 08:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:39.306 08:48:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:39.306 08:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:39.306 08:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:39.306 08:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:39.306 08:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:39.306 08:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.306 08:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.567 08:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.567 08:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:39.567 08:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.567 08:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.567 [ 00:11:39.567 { 00:11:39.567 "name": "BaseBdev4", 00:11:39.567 "aliases": [ 00:11:39.567 "8100b515-c762-4026-9420-204e98e69edd" 00:11:39.567 ], 00:11:39.567 "product_name": "Malloc disk", 00:11:39.567 "block_size": 512, 00:11:39.567 "num_blocks": 65536, 00:11:39.567 "uuid": "8100b515-c762-4026-9420-204e98e69edd", 00:11:39.567 "assigned_rate_limits": { 00:11:39.567 "rw_ios_per_sec": 0, 00:11:39.567 "rw_mbytes_per_sec": 0, 00:11:39.567 "r_mbytes_per_sec": 0, 00:11:39.567 "w_mbytes_per_sec": 0 00:11:39.567 }, 00:11:39.567 "claimed": true, 00:11:39.567 "claim_type": "exclusive_write", 00:11:39.567 "zoned": false, 00:11:39.567 "supported_io_types": { 00:11:39.567 "read": true, 00:11:39.567 "write": true, 00:11:39.567 "unmap": true, 00:11:39.567 "flush": true, 00:11:39.567 "reset": true, 00:11:39.567 
"nvme_admin": false, 00:11:39.567 "nvme_io": false, 00:11:39.567 "nvme_io_md": false, 00:11:39.567 "write_zeroes": true, 00:11:39.567 "zcopy": true, 00:11:39.567 "get_zone_info": false, 00:11:39.567 "zone_management": false, 00:11:39.567 "zone_append": false, 00:11:39.567 "compare": false, 00:11:39.567 "compare_and_write": false, 00:11:39.567 "abort": true, 00:11:39.567 "seek_hole": false, 00:11:39.567 "seek_data": false, 00:11:39.567 "copy": true, 00:11:39.567 "nvme_iov_md": false 00:11:39.567 }, 00:11:39.567 "memory_domains": [ 00:11:39.567 { 00:11:39.567 "dma_device_id": "system", 00:11:39.567 "dma_device_type": 1 00:11:39.567 }, 00:11:39.567 { 00:11:39.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.567 "dma_device_type": 2 00:11:39.567 } 00:11:39.567 ], 00:11:39.567 "driver_specific": {} 00:11:39.567 } 00:11:39.567 ] 00:11:39.567 08:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.567 08:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:39.567 08:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:39.567 08:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:39.567 08:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:39.567 08:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.567 08:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:39.567 08:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.567 08:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.567 08:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.567 08:48:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.567 08:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.567 08:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.567 08:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.567 08:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.567 08:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.567 08:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.567 08:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.567 08:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.567 08:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.567 "name": "Existed_Raid", 00:11:39.567 "uuid": "0bf1138d-b174-4d17-a6f2-2959661b5844", 00:11:39.567 "strip_size_kb": 0, 00:11:39.567 "state": "online", 00:11:39.567 "raid_level": "raid1", 00:11:39.567 "superblock": false, 00:11:39.567 "num_base_bdevs": 4, 00:11:39.567 "num_base_bdevs_discovered": 4, 00:11:39.567 "num_base_bdevs_operational": 4, 00:11:39.567 "base_bdevs_list": [ 00:11:39.567 { 00:11:39.567 "name": "BaseBdev1", 00:11:39.567 "uuid": "abe3ab9d-5445-4ce3-8478-ab6f83feda86", 00:11:39.567 "is_configured": true, 00:11:39.567 "data_offset": 0, 00:11:39.567 "data_size": 65536 00:11:39.567 }, 00:11:39.567 { 00:11:39.567 "name": "BaseBdev2", 00:11:39.567 "uuid": "829f5671-b45c-434d-898a-f7af520649e0", 00:11:39.567 "is_configured": true, 00:11:39.567 "data_offset": 0, 00:11:39.567 "data_size": 65536 00:11:39.567 }, 00:11:39.567 { 00:11:39.567 "name": "BaseBdev3", 00:11:39.567 "uuid": 
"df90e545-73c7-4a67-992b-4f1cdec86a59", 00:11:39.567 "is_configured": true, 00:11:39.567 "data_offset": 0, 00:11:39.567 "data_size": 65536 00:11:39.567 }, 00:11:39.567 { 00:11:39.567 "name": "BaseBdev4", 00:11:39.567 "uuid": "8100b515-c762-4026-9420-204e98e69edd", 00:11:39.567 "is_configured": true, 00:11:39.567 "data_offset": 0, 00:11:39.567 "data_size": 65536 00:11:39.567 } 00:11:39.567 ] 00:11:39.567 }' 00:11:39.567 08:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.568 08:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.828 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:39.828 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:39.828 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:39.828 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:39.828 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:39.828 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:39.828 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:39.828 08:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.828 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:39.828 08:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.828 [2024-10-05 08:48:16.189857] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:39.828 08:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.828 08:48:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:39.828 "name": "Existed_Raid", 00:11:39.828 "aliases": [ 00:11:39.828 "0bf1138d-b174-4d17-a6f2-2959661b5844" 00:11:39.828 ], 00:11:39.828 "product_name": "Raid Volume", 00:11:39.828 "block_size": 512, 00:11:39.828 "num_blocks": 65536, 00:11:39.828 "uuid": "0bf1138d-b174-4d17-a6f2-2959661b5844", 00:11:39.828 "assigned_rate_limits": { 00:11:39.828 "rw_ios_per_sec": 0, 00:11:39.828 "rw_mbytes_per_sec": 0, 00:11:39.828 "r_mbytes_per_sec": 0, 00:11:39.828 "w_mbytes_per_sec": 0 00:11:39.828 }, 00:11:39.828 "claimed": false, 00:11:39.828 "zoned": false, 00:11:39.828 "supported_io_types": { 00:11:39.828 "read": true, 00:11:39.828 "write": true, 00:11:39.828 "unmap": false, 00:11:39.828 "flush": false, 00:11:39.828 "reset": true, 00:11:39.828 "nvme_admin": false, 00:11:39.828 "nvme_io": false, 00:11:39.828 "nvme_io_md": false, 00:11:39.828 "write_zeroes": true, 00:11:39.828 "zcopy": false, 00:11:39.828 "get_zone_info": false, 00:11:39.828 "zone_management": false, 00:11:39.828 "zone_append": false, 00:11:39.828 "compare": false, 00:11:39.828 "compare_and_write": false, 00:11:39.828 "abort": false, 00:11:39.828 "seek_hole": false, 00:11:39.828 "seek_data": false, 00:11:39.828 "copy": false, 00:11:39.828 "nvme_iov_md": false 00:11:39.828 }, 00:11:39.828 "memory_domains": [ 00:11:39.828 { 00:11:39.828 "dma_device_id": "system", 00:11:39.828 "dma_device_type": 1 00:11:39.828 }, 00:11:39.828 { 00:11:39.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.828 "dma_device_type": 2 00:11:39.828 }, 00:11:39.828 { 00:11:39.828 "dma_device_id": "system", 00:11:39.828 "dma_device_type": 1 00:11:39.828 }, 00:11:39.828 { 00:11:39.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.828 "dma_device_type": 2 00:11:39.828 }, 00:11:39.828 { 00:11:39.828 "dma_device_id": "system", 00:11:39.828 "dma_device_type": 1 00:11:39.828 }, 00:11:39.828 { 00:11:39.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:39.828 "dma_device_type": 2 00:11:39.828 }, 00:11:39.828 { 00:11:39.828 "dma_device_id": "system", 00:11:39.828 "dma_device_type": 1 00:11:39.828 }, 00:11:39.828 { 00:11:39.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.828 "dma_device_type": 2 00:11:39.828 } 00:11:39.828 ], 00:11:39.828 "driver_specific": { 00:11:39.828 "raid": { 00:11:39.828 "uuid": "0bf1138d-b174-4d17-a6f2-2959661b5844", 00:11:39.828 "strip_size_kb": 0, 00:11:39.828 "state": "online", 00:11:39.828 "raid_level": "raid1", 00:11:39.828 "superblock": false, 00:11:39.828 "num_base_bdevs": 4, 00:11:39.828 "num_base_bdevs_discovered": 4, 00:11:39.828 "num_base_bdevs_operational": 4, 00:11:39.828 "base_bdevs_list": [ 00:11:39.828 { 00:11:39.828 "name": "BaseBdev1", 00:11:39.828 "uuid": "abe3ab9d-5445-4ce3-8478-ab6f83feda86", 00:11:39.828 "is_configured": true, 00:11:39.828 "data_offset": 0, 00:11:39.828 "data_size": 65536 00:11:39.828 }, 00:11:39.828 { 00:11:39.828 "name": "BaseBdev2", 00:11:39.828 "uuid": "829f5671-b45c-434d-898a-f7af520649e0", 00:11:39.828 "is_configured": true, 00:11:39.828 "data_offset": 0, 00:11:39.828 "data_size": 65536 00:11:39.828 }, 00:11:39.828 { 00:11:39.828 "name": "BaseBdev3", 00:11:39.828 "uuid": "df90e545-73c7-4a67-992b-4f1cdec86a59", 00:11:39.828 "is_configured": true, 00:11:39.828 "data_offset": 0, 00:11:39.828 "data_size": 65536 00:11:39.828 }, 00:11:39.828 { 00:11:39.828 "name": "BaseBdev4", 00:11:39.828 "uuid": "8100b515-c762-4026-9420-204e98e69edd", 00:11:39.828 "is_configured": true, 00:11:39.828 "data_offset": 0, 00:11:39.828 "data_size": 65536 00:11:39.828 } 00:11:39.828 ] 00:11:39.828 } 00:11:39.828 } 00:11:39.828 }' 00:11:39.829 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:39.829 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:39.829 BaseBdev2 00:11:39.829 BaseBdev3 
00:11:39.829 BaseBdev4' 00:11:39.829 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.829 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:39.829 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:40.089 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:40.089 08:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.089 08:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.089 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.089 08:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.089 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.089 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:40.089 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:40.089 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:40.089 08:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.089 08:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.089 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.089 08:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.089 08:48:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.089 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:40.089 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:40.089 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.089 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:40.089 08:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.089 08:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.089 08:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.089 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.089 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:40.089 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:40.089 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.089 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:40.089 08:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.089 08:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.089 08:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.089 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.089 08:48:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:40.089 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:40.089 08:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.089 08:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.089 [2024-10-05 08:48:16.513040] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:40.348 08:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.348 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:40.348 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:40.348 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:40.348 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:40.348 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:40.348 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:40.348 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.348 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:40.348 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.348 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.348 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:40.348 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.348 
08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.348 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.348 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.348 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.348 08:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.348 08:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.348 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.348 08:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.348 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.348 "name": "Existed_Raid", 00:11:40.348 "uuid": "0bf1138d-b174-4d17-a6f2-2959661b5844", 00:11:40.348 "strip_size_kb": 0, 00:11:40.348 "state": "online", 00:11:40.348 "raid_level": "raid1", 00:11:40.348 "superblock": false, 00:11:40.349 "num_base_bdevs": 4, 00:11:40.349 "num_base_bdevs_discovered": 3, 00:11:40.349 "num_base_bdevs_operational": 3, 00:11:40.349 "base_bdevs_list": [ 00:11:40.349 { 00:11:40.349 "name": null, 00:11:40.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.349 "is_configured": false, 00:11:40.349 "data_offset": 0, 00:11:40.349 "data_size": 65536 00:11:40.349 }, 00:11:40.349 { 00:11:40.349 "name": "BaseBdev2", 00:11:40.349 "uuid": "829f5671-b45c-434d-898a-f7af520649e0", 00:11:40.349 "is_configured": true, 00:11:40.349 "data_offset": 0, 00:11:40.349 "data_size": 65536 00:11:40.349 }, 00:11:40.349 { 00:11:40.349 "name": "BaseBdev3", 00:11:40.349 "uuid": "df90e545-73c7-4a67-992b-4f1cdec86a59", 00:11:40.349 "is_configured": true, 00:11:40.349 "data_offset": 0, 
00:11:40.349 "data_size": 65536 00:11:40.349 }, 00:11:40.349 { 00:11:40.349 "name": "BaseBdev4", 00:11:40.349 "uuid": "8100b515-c762-4026-9420-204e98e69edd", 00:11:40.349 "is_configured": true, 00:11:40.349 "data_offset": 0, 00:11:40.349 "data_size": 65536 00:11:40.349 } 00:11:40.349 ] 00:11:40.349 }' 00:11:40.349 08:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.349 08:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.608 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:40.608 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:40.608 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.608 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:40.608 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.608 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.608 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.868 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:40.868 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:40.868 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:40.868 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.868 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.868 [2024-10-05 08:48:17.108321] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:40.868 08:48:17 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.868 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:40.868 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:40.868 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:40.868 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.868 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.868 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.868 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.868 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:40.868 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:40.868 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:40.868 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.868 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.868 [2024-10-05 08:48:17.249424] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:41.130 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.130 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:41.130 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:41.130 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.130 08:48:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.130 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:41.130 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.130 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.130 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:41.130 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:41.130 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:41.130 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.130 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.130 [2024-10-05 08:48:17.406077] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:41.130 [2024-10-05 08:48:17.406186] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:41.130 [2024-10-05 08:48:17.504681] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:41.130 [2024-10-05 08:48:17.504849] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:41.130 [2024-10-05 08:48:17.504870] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:41.130 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.130 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:41.130 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:41.130 08:48:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.130 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:41.130 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.130 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.130 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.130 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:41.130 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:41.130 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:41.130 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:41.130 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:41.130 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:41.130 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.130 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.391 BaseBdev2 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.391 [ 00:11:41.391 { 00:11:41.391 "name": "BaseBdev2", 00:11:41.391 "aliases": [ 00:11:41.391 "9b778ade-75ba-4071-95b6-d3af9b2a1082" 00:11:41.391 ], 00:11:41.391 "product_name": "Malloc disk", 00:11:41.391 "block_size": 512, 00:11:41.391 "num_blocks": 65536, 00:11:41.391 "uuid": "9b778ade-75ba-4071-95b6-d3af9b2a1082", 00:11:41.391 "assigned_rate_limits": { 00:11:41.391 "rw_ios_per_sec": 0, 00:11:41.391 "rw_mbytes_per_sec": 0, 00:11:41.391 "r_mbytes_per_sec": 0, 00:11:41.391 "w_mbytes_per_sec": 0 00:11:41.391 }, 00:11:41.391 "claimed": false, 00:11:41.391 "zoned": false, 00:11:41.391 "supported_io_types": { 00:11:41.391 "read": true, 00:11:41.391 "write": true, 00:11:41.391 "unmap": true, 00:11:41.391 "flush": true, 00:11:41.391 "reset": true, 00:11:41.391 "nvme_admin": false, 00:11:41.391 "nvme_io": false, 00:11:41.391 "nvme_io_md": false, 00:11:41.391 "write_zeroes": true, 00:11:41.391 "zcopy": true, 00:11:41.391 "get_zone_info": false, 00:11:41.391 "zone_management": false, 00:11:41.391 "zone_append": false, 
00:11:41.391 "compare": false, 00:11:41.391 "compare_and_write": false, 00:11:41.391 "abort": true, 00:11:41.391 "seek_hole": false, 00:11:41.391 "seek_data": false, 00:11:41.391 "copy": true, 00:11:41.391 "nvme_iov_md": false 00:11:41.391 }, 00:11:41.391 "memory_domains": [ 00:11:41.391 { 00:11:41.391 "dma_device_id": "system", 00:11:41.391 "dma_device_type": 1 00:11:41.391 }, 00:11:41.391 { 00:11:41.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.391 "dma_device_type": 2 00:11:41.391 } 00:11:41.391 ], 00:11:41.391 "driver_specific": {} 00:11:41.391 } 00:11:41.391 ] 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.391 BaseBdev3 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.391 [ 00:11:41.391 { 00:11:41.391 "name": "BaseBdev3", 00:11:41.391 "aliases": [ 00:11:41.391 "01e64ebd-58fa-4dfc-b0ce-98df96a3cad3" 00:11:41.391 ], 00:11:41.391 "product_name": "Malloc disk", 00:11:41.391 "block_size": 512, 00:11:41.391 "num_blocks": 65536, 00:11:41.391 "uuid": "01e64ebd-58fa-4dfc-b0ce-98df96a3cad3", 00:11:41.391 "assigned_rate_limits": { 00:11:41.391 "rw_ios_per_sec": 0, 00:11:41.391 "rw_mbytes_per_sec": 0, 00:11:41.391 "r_mbytes_per_sec": 0, 00:11:41.391 "w_mbytes_per_sec": 0 00:11:41.391 }, 00:11:41.391 "claimed": false, 00:11:41.391 "zoned": false, 00:11:41.391 "supported_io_types": { 00:11:41.391 "read": true, 00:11:41.391 "write": true, 00:11:41.391 "unmap": true, 00:11:41.391 "flush": true, 00:11:41.391 "reset": true, 00:11:41.391 "nvme_admin": false, 00:11:41.391 "nvme_io": false, 00:11:41.391 "nvme_io_md": false, 00:11:41.391 "write_zeroes": true, 00:11:41.391 "zcopy": true, 00:11:41.391 "get_zone_info": false, 00:11:41.391 "zone_management": false, 00:11:41.391 "zone_append": false, 
00:11:41.391 "compare": false, 00:11:41.391 "compare_and_write": false, 00:11:41.391 "abort": true, 00:11:41.391 "seek_hole": false, 00:11:41.391 "seek_data": false, 00:11:41.391 "copy": true, 00:11:41.391 "nvme_iov_md": false 00:11:41.391 }, 00:11:41.391 "memory_domains": [ 00:11:41.391 { 00:11:41.391 "dma_device_id": "system", 00:11:41.391 "dma_device_type": 1 00:11:41.391 }, 00:11:41.391 { 00:11:41.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.391 "dma_device_type": 2 00:11:41.391 } 00:11:41.391 ], 00:11:41.391 "driver_specific": {} 00:11:41.391 } 00:11:41.391 ] 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.391 BaseBdev4 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:41.391 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:41.392 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:41.392 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.392 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.392 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.392 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:41.392 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.392 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.392 [ 00:11:41.392 { 00:11:41.392 "name": "BaseBdev4", 00:11:41.392 "aliases": [ 00:11:41.392 "355cd546-2a21-4c0a-896a-fe1893a8a8e5" 00:11:41.392 ], 00:11:41.392 "product_name": "Malloc disk", 00:11:41.392 "block_size": 512, 00:11:41.392 "num_blocks": 65536, 00:11:41.392 "uuid": "355cd546-2a21-4c0a-896a-fe1893a8a8e5", 00:11:41.392 "assigned_rate_limits": { 00:11:41.392 "rw_ios_per_sec": 0, 00:11:41.392 "rw_mbytes_per_sec": 0, 00:11:41.392 "r_mbytes_per_sec": 0, 00:11:41.392 "w_mbytes_per_sec": 0 00:11:41.392 }, 00:11:41.392 "claimed": false, 00:11:41.392 "zoned": false, 00:11:41.392 "supported_io_types": { 00:11:41.392 "read": true, 00:11:41.392 "write": true, 00:11:41.392 "unmap": true, 00:11:41.392 "flush": true, 00:11:41.392 "reset": true, 00:11:41.392 "nvme_admin": false, 00:11:41.392 "nvme_io": false, 00:11:41.392 "nvme_io_md": false, 00:11:41.392 "write_zeroes": true, 00:11:41.392 "zcopy": true, 00:11:41.392 "get_zone_info": false, 00:11:41.392 "zone_management": false, 00:11:41.392 "zone_append": false, 
00:11:41.392 "compare": false, 00:11:41.392 "compare_and_write": false, 00:11:41.392 "abort": true, 00:11:41.392 "seek_hole": false, 00:11:41.392 "seek_data": false, 00:11:41.392 "copy": true, 00:11:41.392 "nvme_iov_md": false 00:11:41.392 }, 00:11:41.392 "memory_domains": [ 00:11:41.392 { 00:11:41.392 "dma_device_id": "system", 00:11:41.392 "dma_device_type": 1 00:11:41.392 }, 00:11:41.392 { 00:11:41.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.392 "dma_device_type": 2 00:11:41.392 } 00:11:41.392 ], 00:11:41.392 "driver_specific": {} 00:11:41.392 } 00:11:41.392 ] 00:11:41.392 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.392 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:41.392 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:41.392 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:41.392 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:41.392 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.392 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.392 [2024-10-05 08:48:17.815111] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:41.392 [2024-10-05 08:48:17.815246] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:41.392 [2024-10-05 08:48:17.815291] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:41.392 [2024-10-05 08:48:17.817454] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:41.392 [2024-10-05 08:48:17.817554] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:41.392 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.392 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:41.392 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.392 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.392 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.392 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.392 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.392 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.392 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.392 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.392 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.392 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.392 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.392 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.392 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.392 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.652 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:41.652 "name": "Existed_Raid", 00:11:41.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.652 "strip_size_kb": 0, 00:11:41.652 "state": "configuring", 00:11:41.652 "raid_level": "raid1", 00:11:41.652 "superblock": false, 00:11:41.652 "num_base_bdevs": 4, 00:11:41.652 "num_base_bdevs_discovered": 3, 00:11:41.652 "num_base_bdevs_operational": 4, 00:11:41.652 "base_bdevs_list": [ 00:11:41.652 { 00:11:41.652 "name": "BaseBdev1", 00:11:41.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.652 "is_configured": false, 00:11:41.652 "data_offset": 0, 00:11:41.652 "data_size": 0 00:11:41.652 }, 00:11:41.652 { 00:11:41.652 "name": "BaseBdev2", 00:11:41.652 "uuid": "9b778ade-75ba-4071-95b6-d3af9b2a1082", 00:11:41.652 "is_configured": true, 00:11:41.652 "data_offset": 0, 00:11:41.652 "data_size": 65536 00:11:41.652 }, 00:11:41.652 { 00:11:41.652 "name": "BaseBdev3", 00:11:41.652 "uuid": "01e64ebd-58fa-4dfc-b0ce-98df96a3cad3", 00:11:41.652 "is_configured": true, 00:11:41.652 "data_offset": 0, 00:11:41.652 "data_size": 65536 00:11:41.652 }, 00:11:41.652 { 00:11:41.652 "name": "BaseBdev4", 00:11:41.652 "uuid": "355cd546-2a21-4c0a-896a-fe1893a8a8e5", 00:11:41.652 "is_configured": true, 00:11:41.652 "data_offset": 0, 00:11:41.652 "data_size": 65536 00:11:41.652 } 00:11:41.652 ] 00:11:41.652 }' 00:11:41.652 08:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.652 08:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.913 08:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:41.913 08:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.913 08:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.913 [2024-10-05 08:48:18.222418] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:11:41.913 08:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.913 08:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:41.913 08:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.913 08:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.913 08:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.913 08:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.913 08:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.913 08:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.913 08:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.913 08:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.913 08:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.913 08:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.913 08:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.913 08:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.913 08:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.913 08:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.913 08:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.913 "name": "Existed_Raid", 00:11:41.913 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:41.913 "strip_size_kb": 0, 00:11:41.913 "state": "configuring", 00:11:41.913 "raid_level": "raid1", 00:11:41.913 "superblock": false, 00:11:41.913 "num_base_bdevs": 4, 00:11:41.913 "num_base_bdevs_discovered": 2, 00:11:41.913 "num_base_bdevs_operational": 4, 00:11:41.913 "base_bdevs_list": [ 00:11:41.913 { 00:11:41.913 "name": "BaseBdev1", 00:11:41.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.913 "is_configured": false, 00:11:41.913 "data_offset": 0, 00:11:41.913 "data_size": 0 00:11:41.913 }, 00:11:41.913 { 00:11:41.913 "name": null, 00:11:41.913 "uuid": "9b778ade-75ba-4071-95b6-d3af9b2a1082", 00:11:41.913 "is_configured": false, 00:11:41.913 "data_offset": 0, 00:11:41.913 "data_size": 65536 00:11:41.913 }, 00:11:41.913 { 00:11:41.913 "name": "BaseBdev3", 00:11:41.913 "uuid": "01e64ebd-58fa-4dfc-b0ce-98df96a3cad3", 00:11:41.913 "is_configured": true, 00:11:41.913 "data_offset": 0, 00:11:41.913 "data_size": 65536 00:11:41.913 }, 00:11:41.913 { 00:11:41.913 "name": "BaseBdev4", 00:11:41.913 "uuid": "355cd546-2a21-4c0a-896a-fe1893a8a8e5", 00:11:41.913 "is_configured": true, 00:11:41.913 "data_offset": 0, 00:11:41.913 "data_size": 65536 00:11:41.913 } 00:11:41.913 ] 00:11:41.913 }' 00:11:41.913 08:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.913 08:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.482 08:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.482 08:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.482 08:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.482 08:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:42.482 08:48:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.482 08:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:42.482 08:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:42.482 08:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.482 08:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.482 [2024-10-05 08:48:18.743002] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:42.482 BaseBdev1 00:11:42.482 08:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.482 08:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:42.482 08:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:42.482 08:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:42.482 08:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:42.482 08:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:42.482 08:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:42.482 08:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:42.482 08:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.482 08:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.482 08:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.482 08:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:11:42.482 08:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.482 08:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.482 [ 00:11:42.482 { 00:11:42.482 "name": "BaseBdev1", 00:11:42.482 "aliases": [ 00:11:42.482 "d06d998a-e599-42d0-9854-7d05315b2b18" 00:11:42.482 ], 00:11:42.482 "product_name": "Malloc disk", 00:11:42.482 "block_size": 512, 00:11:42.482 "num_blocks": 65536, 00:11:42.482 "uuid": "d06d998a-e599-42d0-9854-7d05315b2b18", 00:11:42.482 "assigned_rate_limits": { 00:11:42.482 "rw_ios_per_sec": 0, 00:11:42.482 "rw_mbytes_per_sec": 0, 00:11:42.482 "r_mbytes_per_sec": 0, 00:11:42.482 "w_mbytes_per_sec": 0 00:11:42.482 }, 00:11:42.482 "claimed": true, 00:11:42.482 "claim_type": "exclusive_write", 00:11:42.482 "zoned": false, 00:11:42.482 "supported_io_types": { 00:11:42.482 "read": true, 00:11:42.482 "write": true, 00:11:42.482 "unmap": true, 00:11:42.482 "flush": true, 00:11:42.482 "reset": true, 00:11:42.482 "nvme_admin": false, 00:11:42.482 "nvme_io": false, 00:11:42.482 "nvme_io_md": false, 00:11:42.482 "write_zeroes": true, 00:11:42.482 "zcopy": true, 00:11:42.482 "get_zone_info": false, 00:11:42.482 "zone_management": false, 00:11:42.482 "zone_append": false, 00:11:42.482 "compare": false, 00:11:42.482 "compare_and_write": false, 00:11:42.482 "abort": true, 00:11:42.482 "seek_hole": false, 00:11:42.482 "seek_data": false, 00:11:42.482 "copy": true, 00:11:42.482 "nvme_iov_md": false 00:11:42.482 }, 00:11:42.482 "memory_domains": [ 00:11:42.482 { 00:11:42.482 "dma_device_id": "system", 00:11:42.482 "dma_device_type": 1 00:11:42.482 }, 00:11:42.482 { 00:11:42.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.482 "dma_device_type": 2 00:11:42.482 } 00:11:42.482 ], 00:11:42.482 "driver_specific": {} 00:11:42.482 } 00:11:42.482 ] 00:11:42.482 08:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:42.482 08:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:42.482 08:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:42.482 08:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.482 08:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.482 08:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.482 08:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.482 08:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.482 08:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.482 08:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.482 08:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.482 08:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.482 08:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.482 08:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.482 08:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.482 08:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.482 08:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.483 08:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.483 "name": "Existed_Raid", 00:11:42.483 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:42.483 "strip_size_kb": 0, 00:11:42.483 "state": "configuring", 00:11:42.483 "raid_level": "raid1", 00:11:42.483 "superblock": false, 00:11:42.483 "num_base_bdevs": 4, 00:11:42.483 "num_base_bdevs_discovered": 3, 00:11:42.483 "num_base_bdevs_operational": 4, 00:11:42.483 "base_bdevs_list": [ 00:11:42.483 { 00:11:42.483 "name": "BaseBdev1", 00:11:42.483 "uuid": "d06d998a-e599-42d0-9854-7d05315b2b18", 00:11:42.483 "is_configured": true, 00:11:42.483 "data_offset": 0, 00:11:42.483 "data_size": 65536 00:11:42.483 }, 00:11:42.483 { 00:11:42.483 "name": null, 00:11:42.483 "uuid": "9b778ade-75ba-4071-95b6-d3af9b2a1082", 00:11:42.483 "is_configured": false, 00:11:42.483 "data_offset": 0, 00:11:42.483 "data_size": 65536 00:11:42.483 }, 00:11:42.483 { 00:11:42.483 "name": "BaseBdev3", 00:11:42.483 "uuid": "01e64ebd-58fa-4dfc-b0ce-98df96a3cad3", 00:11:42.483 "is_configured": true, 00:11:42.483 "data_offset": 0, 00:11:42.483 "data_size": 65536 00:11:42.483 }, 00:11:42.483 { 00:11:42.483 "name": "BaseBdev4", 00:11:42.483 "uuid": "355cd546-2a21-4c0a-896a-fe1893a8a8e5", 00:11:42.483 "is_configured": true, 00:11:42.483 "data_offset": 0, 00:11:42.483 "data_size": 65536 00:11:42.483 } 00:11:42.483 ] 00:11:42.483 }' 00:11:42.483 08:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.483 08:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.743 08:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.743 08:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:42.743 08:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.743 08:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.743 08:48:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.743 08:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:42.743 08:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:42.743 08:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.743 08:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.004 [2024-10-05 08:48:19.218248] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:43.004 08:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.004 08:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:43.004 08:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.004 08:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:43.004 08:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.004 08:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.004 08:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:43.004 08:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.004 08:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.004 08:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.004 08:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.004 08:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:11:43.004 08:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.004 08:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.004 08:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.004 08:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.004 08:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.004 "name": "Existed_Raid", 00:11:43.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.004 "strip_size_kb": 0, 00:11:43.004 "state": "configuring", 00:11:43.004 "raid_level": "raid1", 00:11:43.004 "superblock": false, 00:11:43.004 "num_base_bdevs": 4, 00:11:43.004 "num_base_bdevs_discovered": 2, 00:11:43.004 "num_base_bdevs_operational": 4, 00:11:43.004 "base_bdevs_list": [ 00:11:43.004 { 00:11:43.004 "name": "BaseBdev1", 00:11:43.004 "uuid": "d06d998a-e599-42d0-9854-7d05315b2b18", 00:11:43.004 "is_configured": true, 00:11:43.004 "data_offset": 0, 00:11:43.004 "data_size": 65536 00:11:43.004 }, 00:11:43.004 { 00:11:43.004 "name": null, 00:11:43.004 "uuid": "9b778ade-75ba-4071-95b6-d3af9b2a1082", 00:11:43.004 "is_configured": false, 00:11:43.004 "data_offset": 0, 00:11:43.004 "data_size": 65536 00:11:43.004 }, 00:11:43.004 { 00:11:43.004 "name": null, 00:11:43.004 "uuid": "01e64ebd-58fa-4dfc-b0ce-98df96a3cad3", 00:11:43.004 "is_configured": false, 00:11:43.004 "data_offset": 0, 00:11:43.004 "data_size": 65536 00:11:43.004 }, 00:11:43.004 { 00:11:43.004 "name": "BaseBdev4", 00:11:43.004 "uuid": "355cd546-2a21-4c0a-896a-fe1893a8a8e5", 00:11:43.004 "is_configured": true, 00:11:43.004 "data_offset": 0, 00:11:43.004 "data_size": 65536 00:11:43.004 } 00:11:43.004 ] 00:11:43.004 }' 00:11:43.004 08:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.004 08:48:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.264 08:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.264 08:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:43.264 08:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.264 08:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.264 08:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.264 08:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:43.264 08:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:43.264 08:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.264 08:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.264 [2024-10-05 08:48:19.645502] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:43.264 08:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.264 08:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:43.264 08:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.264 08:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:43.264 08:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.264 08:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.264 08:48:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:43.264 08:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.264 08:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.264 08:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.264 08:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.264 08:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.264 08:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.264 08:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.264 08:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.264 08:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.264 08:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.264 "name": "Existed_Raid", 00:11:43.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.264 "strip_size_kb": 0, 00:11:43.264 "state": "configuring", 00:11:43.264 "raid_level": "raid1", 00:11:43.264 "superblock": false, 00:11:43.265 "num_base_bdevs": 4, 00:11:43.265 "num_base_bdevs_discovered": 3, 00:11:43.265 "num_base_bdevs_operational": 4, 00:11:43.265 "base_bdevs_list": [ 00:11:43.265 { 00:11:43.265 "name": "BaseBdev1", 00:11:43.265 "uuid": "d06d998a-e599-42d0-9854-7d05315b2b18", 00:11:43.265 "is_configured": true, 00:11:43.265 "data_offset": 0, 00:11:43.265 "data_size": 65536 00:11:43.265 }, 00:11:43.265 { 00:11:43.265 "name": null, 00:11:43.265 "uuid": "9b778ade-75ba-4071-95b6-d3af9b2a1082", 00:11:43.265 "is_configured": false, 00:11:43.265 "data_offset": 
0, 00:11:43.265 "data_size": 65536 00:11:43.265 }, 00:11:43.265 { 00:11:43.265 "name": "BaseBdev3", 00:11:43.265 "uuid": "01e64ebd-58fa-4dfc-b0ce-98df96a3cad3", 00:11:43.265 "is_configured": true, 00:11:43.265 "data_offset": 0, 00:11:43.265 "data_size": 65536 00:11:43.265 }, 00:11:43.265 { 00:11:43.265 "name": "BaseBdev4", 00:11:43.265 "uuid": "355cd546-2a21-4c0a-896a-fe1893a8a8e5", 00:11:43.265 "is_configured": true, 00:11:43.265 "data_offset": 0, 00:11:43.265 "data_size": 65536 00:11:43.265 } 00:11:43.265 ] 00:11:43.265 }' 00:11:43.265 08:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.265 08:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.835 08:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.835 08:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.835 08:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.835 08:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:43.835 08:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.835 08:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:43.835 08:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:43.835 08:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.835 08:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.835 [2024-10-05 08:48:20.104719] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:43.836 08:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.836 08:48:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:43.836 08:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.836 08:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:43.836 08:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.836 08:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.836 08:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:43.836 08:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.836 08:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.836 08:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.836 08:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.836 08:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.836 08:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.836 08:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.836 08:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.836 08:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.836 08:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.836 "name": "Existed_Raid", 00:11:43.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.836 "strip_size_kb": 0, 00:11:43.836 "state": "configuring", 00:11:43.836 
"raid_level": "raid1", 00:11:43.836 "superblock": false, 00:11:43.836 "num_base_bdevs": 4, 00:11:43.836 "num_base_bdevs_discovered": 2, 00:11:43.836 "num_base_bdevs_operational": 4, 00:11:43.836 "base_bdevs_list": [ 00:11:43.836 { 00:11:43.836 "name": null, 00:11:43.836 "uuid": "d06d998a-e599-42d0-9854-7d05315b2b18", 00:11:43.836 "is_configured": false, 00:11:43.836 "data_offset": 0, 00:11:43.836 "data_size": 65536 00:11:43.836 }, 00:11:43.836 { 00:11:43.836 "name": null, 00:11:43.836 "uuid": "9b778ade-75ba-4071-95b6-d3af9b2a1082", 00:11:43.836 "is_configured": false, 00:11:43.836 "data_offset": 0, 00:11:43.836 "data_size": 65536 00:11:43.836 }, 00:11:43.836 { 00:11:43.836 "name": "BaseBdev3", 00:11:43.836 "uuid": "01e64ebd-58fa-4dfc-b0ce-98df96a3cad3", 00:11:43.836 "is_configured": true, 00:11:43.836 "data_offset": 0, 00:11:43.836 "data_size": 65536 00:11:43.836 }, 00:11:43.836 { 00:11:43.836 "name": "BaseBdev4", 00:11:43.836 "uuid": "355cd546-2a21-4c0a-896a-fe1893a8a8e5", 00:11:43.836 "is_configured": true, 00:11:43.836 "data_offset": 0, 00:11:43.836 "data_size": 65536 00:11:43.836 } 00:11:43.836 ] 00:11:43.836 }' 00:11:43.836 08:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.836 08:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.406 08:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.406 08:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:44.406 08:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.406 08:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.406 08:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.406 08:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:11:44.406 08:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:44.406 08:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.406 08:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.406 [2024-10-05 08:48:20.648408] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:44.406 08:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.406 08:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:44.406 08:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:44.406 08:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:44.406 08:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:44.406 08:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:44.406 08:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:44.406 08:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.406 08:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.406 08:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.406 08:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.406 08:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.406 08:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:44.406 08:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.406 08:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.406 08:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.406 08:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.406 "name": "Existed_Raid", 00:11:44.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.406 "strip_size_kb": 0, 00:11:44.406 "state": "configuring", 00:11:44.406 "raid_level": "raid1", 00:11:44.406 "superblock": false, 00:11:44.406 "num_base_bdevs": 4, 00:11:44.406 "num_base_bdevs_discovered": 3, 00:11:44.406 "num_base_bdevs_operational": 4, 00:11:44.406 "base_bdevs_list": [ 00:11:44.406 { 00:11:44.406 "name": null, 00:11:44.406 "uuid": "d06d998a-e599-42d0-9854-7d05315b2b18", 00:11:44.406 "is_configured": false, 00:11:44.406 "data_offset": 0, 00:11:44.406 "data_size": 65536 00:11:44.406 }, 00:11:44.406 { 00:11:44.406 "name": "BaseBdev2", 00:11:44.406 "uuid": "9b778ade-75ba-4071-95b6-d3af9b2a1082", 00:11:44.406 "is_configured": true, 00:11:44.406 "data_offset": 0, 00:11:44.407 "data_size": 65536 00:11:44.407 }, 00:11:44.407 { 00:11:44.407 "name": "BaseBdev3", 00:11:44.407 "uuid": "01e64ebd-58fa-4dfc-b0ce-98df96a3cad3", 00:11:44.407 "is_configured": true, 00:11:44.407 "data_offset": 0, 00:11:44.407 "data_size": 65536 00:11:44.407 }, 00:11:44.407 { 00:11:44.407 "name": "BaseBdev4", 00:11:44.407 "uuid": "355cd546-2a21-4c0a-896a-fe1893a8a8e5", 00:11:44.407 "is_configured": true, 00:11:44.407 "data_offset": 0, 00:11:44.407 "data_size": 65536 00:11:44.407 } 00:11:44.407 ] 00:11:44.407 }' 00:11:44.407 08:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.407 08:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.666 08:48:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:44.666 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.666 08:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.666 08:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.666 08:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.666 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:44.666 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:44.666 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.666 08:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.666 08:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.926 08:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.926 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d06d998a-e599-42d0-9854-7d05315b2b18 00:11:44.926 08:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.926 08:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.926 [2024-10-05 08:48:21.221860] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:44.926 [2024-10-05 08:48:21.222021] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:44.926 [2024-10-05 08:48:21.222057] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:44.926 
[2024-10-05 08:48:21.222416] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:44.926 [2024-10-05 08:48:21.222642] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:44.926 [2024-10-05 08:48:21.222685] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:44.926 [2024-10-05 08:48:21.223029] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:44.926 NewBaseBdev 00:11:44.926 08:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.926 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:44.926 08:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:44.926 08:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:44.926 08:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:44.926 08:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:44.926 08:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:44.926 08:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:44.926 08:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.926 08:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.926 08:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.926 08:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:44.926 08:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:44.926 08:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.926 [ 00:11:44.926 { 00:11:44.926 "name": "NewBaseBdev", 00:11:44.926 "aliases": [ 00:11:44.926 "d06d998a-e599-42d0-9854-7d05315b2b18" 00:11:44.926 ], 00:11:44.926 "product_name": "Malloc disk", 00:11:44.926 "block_size": 512, 00:11:44.926 "num_blocks": 65536, 00:11:44.926 "uuid": "d06d998a-e599-42d0-9854-7d05315b2b18", 00:11:44.926 "assigned_rate_limits": { 00:11:44.926 "rw_ios_per_sec": 0, 00:11:44.926 "rw_mbytes_per_sec": 0, 00:11:44.926 "r_mbytes_per_sec": 0, 00:11:44.926 "w_mbytes_per_sec": 0 00:11:44.926 }, 00:11:44.926 "claimed": true, 00:11:44.926 "claim_type": "exclusive_write", 00:11:44.926 "zoned": false, 00:11:44.926 "supported_io_types": { 00:11:44.926 "read": true, 00:11:44.926 "write": true, 00:11:44.926 "unmap": true, 00:11:44.926 "flush": true, 00:11:44.926 "reset": true, 00:11:44.926 "nvme_admin": false, 00:11:44.926 "nvme_io": false, 00:11:44.926 "nvme_io_md": false, 00:11:44.926 "write_zeroes": true, 00:11:44.926 "zcopy": true, 00:11:44.927 "get_zone_info": false, 00:11:44.927 "zone_management": false, 00:11:44.927 "zone_append": false, 00:11:44.927 "compare": false, 00:11:44.927 "compare_and_write": false, 00:11:44.927 "abort": true, 00:11:44.927 "seek_hole": false, 00:11:44.927 "seek_data": false, 00:11:44.927 "copy": true, 00:11:44.927 "nvme_iov_md": false 00:11:44.927 }, 00:11:44.927 "memory_domains": [ 00:11:44.927 { 00:11:44.927 "dma_device_id": "system", 00:11:44.927 "dma_device_type": 1 00:11:44.927 }, 00:11:44.927 { 00:11:44.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.927 "dma_device_type": 2 00:11:44.927 } 00:11:44.927 ], 00:11:44.927 "driver_specific": {} 00:11:44.927 } 00:11:44.927 ] 00:11:44.927 08:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.927 08:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 
00:11:44.927 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:44.927 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:44.927 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:44.927 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:44.927 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:44.927 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:44.927 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.927 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.927 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.927 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.927 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.927 08:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.927 08:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.927 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.927 08:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.927 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.927 "name": "Existed_Raid", 00:11:44.927 "uuid": "401e75e0-72de-4806-96c7-2e7459afff79", 00:11:44.927 "strip_size_kb": 0, 00:11:44.927 "state": "online", 00:11:44.927 
"raid_level": "raid1", 00:11:44.927 "superblock": false, 00:11:44.927 "num_base_bdevs": 4, 00:11:44.927 "num_base_bdevs_discovered": 4, 00:11:44.927 "num_base_bdevs_operational": 4, 00:11:44.927 "base_bdevs_list": [ 00:11:44.927 { 00:11:44.927 "name": "NewBaseBdev", 00:11:44.927 "uuid": "d06d998a-e599-42d0-9854-7d05315b2b18", 00:11:44.927 "is_configured": true, 00:11:44.927 "data_offset": 0, 00:11:44.927 "data_size": 65536 00:11:44.927 }, 00:11:44.927 { 00:11:44.927 "name": "BaseBdev2", 00:11:44.927 "uuid": "9b778ade-75ba-4071-95b6-d3af9b2a1082", 00:11:44.927 "is_configured": true, 00:11:44.927 "data_offset": 0, 00:11:44.927 "data_size": 65536 00:11:44.927 }, 00:11:44.927 { 00:11:44.927 "name": "BaseBdev3", 00:11:44.927 "uuid": "01e64ebd-58fa-4dfc-b0ce-98df96a3cad3", 00:11:44.927 "is_configured": true, 00:11:44.927 "data_offset": 0, 00:11:44.927 "data_size": 65536 00:11:44.927 }, 00:11:44.927 { 00:11:44.927 "name": "BaseBdev4", 00:11:44.927 "uuid": "355cd546-2a21-4c0a-896a-fe1893a8a8e5", 00:11:44.927 "is_configured": true, 00:11:44.927 "data_offset": 0, 00:11:44.927 "data_size": 65536 00:11:44.927 } 00:11:44.927 ] 00:11:44.927 }' 00:11:44.927 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.927 08:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.497 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:45.497 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:45.497 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:45.497 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:45.497 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:45.497 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:11:45.497 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:45.497 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:45.497 08:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.497 08:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.497 [2024-10-05 08:48:21.709372] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:45.497 08:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.497 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:45.497 "name": "Existed_Raid", 00:11:45.497 "aliases": [ 00:11:45.497 "401e75e0-72de-4806-96c7-2e7459afff79" 00:11:45.497 ], 00:11:45.497 "product_name": "Raid Volume", 00:11:45.497 "block_size": 512, 00:11:45.497 "num_blocks": 65536, 00:11:45.497 "uuid": "401e75e0-72de-4806-96c7-2e7459afff79", 00:11:45.497 "assigned_rate_limits": { 00:11:45.498 "rw_ios_per_sec": 0, 00:11:45.498 "rw_mbytes_per_sec": 0, 00:11:45.498 "r_mbytes_per_sec": 0, 00:11:45.498 "w_mbytes_per_sec": 0 00:11:45.498 }, 00:11:45.498 "claimed": false, 00:11:45.498 "zoned": false, 00:11:45.498 "supported_io_types": { 00:11:45.498 "read": true, 00:11:45.498 "write": true, 00:11:45.498 "unmap": false, 00:11:45.498 "flush": false, 00:11:45.498 "reset": true, 00:11:45.498 "nvme_admin": false, 00:11:45.498 "nvme_io": false, 00:11:45.498 "nvme_io_md": false, 00:11:45.498 "write_zeroes": true, 00:11:45.498 "zcopy": false, 00:11:45.498 "get_zone_info": false, 00:11:45.498 "zone_management": false, 00:11:45.498 "zone_append": false, 00:11:45.498 "compare": false, 00:11:45.498 "compare_and_write": false, 00:11:45.498 "abort": false, 00:11:45.498 "seek_hole": false, 00:11:45.498 "seek_data": false, 00:11:45.498 
"copy": false, 00:11:45.498 "nvme_iov_md": false 00:11:45.498 }, 00:11:45.498 "memory_domains": [ 00:11:45.498 { 00:11:45.498 "dma_device_id": "system", 00:11:45.498 "dma_device_type": 1 00:11:45.498 }, 00:11:45.498 { 00:11:45.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.498 "dma_device_type": 2 00:11:45.498 }, 00:11:45.498 { 00:11:45.498 "dma_device_id": "system", 00:11:45.498 "dma_device_type": 1 00:11:45.498 }, 00:11:45.498 { 00:11:45.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.498 "dma_device_type": 2 00:11:45.498 }, 00:11:45.498 { 00:11:45.498 "dma_device_id": "system", 00:11:45.498 "dma_device_type": 1 00:11:45.498 }, 00:11:45.498 { 00:11:45.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.498 "dma_device_type": 2 00:11:45.498 }, 00:11:45.498 { 00:11:45.498 "dma_device_id": "system", 00:11:45.498 "dma_device_type": 1 00:11:45.498 }, 00:11:45.498 { 00:11:45.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.498 "dma_device_type": 2 00:11:45.498 } 00:11:45.498 ], 00:11:45.498 "driver_specific": { 00:11:45.498 "raid": { 00:11:45.498 "uuid": "401e75e0-72de-4806-96c7-2e7459afff79", 00:11:45.498 "strip_size_kb": 0, 00:11:45.498 "state": "online", 00:11:45.498 "raid_level": "raid1", 00:11:45.498 "superblock": false, 00:11:45.498 "num_base_bdevs": 4, 00:11:45.498 "num_base_bdevs_discovered": 4, 00:11:45.498 "num_base_bdevs_operational": 4, 00:11:45.498 "base_bdevs_list": [ 00:11:45.498 { 00:11:45.498 "name": "NewBaseBdev", 00:11:45.498 "uuid": "d06d998a-e599-42d0-9854-7d05315b2b18", 00:11:45.498 "is_configured": true, 00:11:45.498 "data_offset": 0, 00:11:45.498 "data_size": 65536 00:11:45.498 }, 00:11:45.498 { 00:11:45.498 "name": "BaseBdev2", 00:11:45.498 "uuid": "9b778ade-75ba-4071-95b6-d3af9b2a1082", 00:11:45.498 "is_configured": true, 00:11:45.498 "data_offset": 0, 00:11:45.498 "data_size": 65536 00:11:45.498 }, 00:11:45.498 { 00:11:45.498 "name": "BaseBdev3", 00:11:45.498 "uuid": "01e64ebd-58fa-4dfc-b0ce-98df96a3cad3", 00:11:45.498 
"is_configured": true, 00:11:45.498 "data_offset": 0, 00:11:45.498 "data_size": 65536 00:11:45.498 }, 00:11:45.498 { 00:11:45.498 "name": "BaseBdev4", 00:11:45.498 "uuid": "355cd546-2a21-4c0a-896a-fe1893a8a8e5", 00:11:45.498 "is_configured": true, 00:11:45.498 "data_offset": 0, 00:11:45.498 "data_size": 65536 00:11:45.498 } 00:11:45.498 ] 00:11:45.498 } 00:11:45.498 } 00:11:45.498 }' 00:11:45.498 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:45.498 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:45.498 BaseBdev2 00:11:45.498 BaseBdev3 00:11:45.498 BaseBdev4' 00:11:45.498 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.498 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:45.498 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.498 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:45.498 08:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.498 08:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.498 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.498 08:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.498 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.498 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.498 08:48:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.498 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:45.498 08:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.498 08:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.498 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.498 08:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.498 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.498 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.498 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.498 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.498 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:45.498 08:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.498 08:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.498 08:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.498 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.498 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.498 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.498 08:48:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:45.498 08:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.498 08:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.498 08:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.758 08:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.758 08:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.758 08:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.758 08:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:45.758 08:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.758 08:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.758 [2024-10-05 08:48:22.012536] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:45.758 [2024-10-05 08:48:22.012564] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:45.758 [2024-10-05 08:48:22.012640] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:45.758 [2024-10-05 08:48:22.012971] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:45.758 [2024-10-05 08:48:22.012987] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:45.758 08:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.758 08:48:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 71572 00:11:45.758 08:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 71572 ']' 00:11:45.758 08:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 71572 00:11:45.758 08:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:11:45.759 08:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:45.759 08:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71572 00:11:45.759 killing process with pid 71572 00:11:45.759 08:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:45.759 08:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:45.759 08:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71572' 00:11:45.759 08:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 71572 00:11:45.759 [2024-10-05 08:48:22.043372] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:45.759 08:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 71572 00:11:46.023 [2024-10-05 08:48:22.451189] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:47.403 08:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:47.403 00:11:47.403 real 0m11.275s 00:11:47.403 user 0m17.404s 00:11:47.403 sys 0m2.183s 00:11:47.403 08:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:47.403 ************************************ 00:11:47.403 END TEST raid_state_function_test 00:11:47.403 ************************************ 00:11:47.403 08:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:47.403 08:48:23 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:11:47.403 08:48:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:47.403 08:48:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:47.403 08:48:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:47.403 ************************************ 00:11:47.403 START TEST raid_state_function_test_sb 00:11:47.403 ************************************ 00:11:47.403 08:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:11:47.403 08:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:47.403 08:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:47.403 08:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:47.403 08:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:47.403 08:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:47.403 08:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:47.403 08:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:47.403 08:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:47.403 08:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:47.403 08:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:47.403 08:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:47.403 08:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:47.403 
08:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:47.403 08:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:47.403 08:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:47.403 08:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:47.403 08:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:47.403 08:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:47.403 08:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:47.403 08:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:47.403 08:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:47.403 08:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:47.403 08:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:47.403 08:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:47.403 08:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:47.403 08:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:47.403 08:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:47.403 08:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:47.403 Process raid pid: 72176 00:11:47.403 08:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72176 00:11:47.403 08:48:23 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:47.403 08:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72176' 00:11:47.403 08:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72176 00:11:47.403 08:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 72176 ']' 00:11:47.403 08:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.403 08:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:47.403 08:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.403 08:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:47.403 08:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.663 [2024-10-05 08:48:23.956786] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:11:47.663 [2024-10-05 08:48:23.957005] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:47.663 [2024-10-05 08:48:24.125811] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.924 [2024-10-05 08:48:24.381630] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.184 [2024-10-05 08:48:24.605898] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:48.184 [2024-10-05 08:48:24.606047] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:48.444 08:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:48.444 08:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:48.444 08:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:48.444 08:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.445 08:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.445 [2024-10-05 08:48:24.783994] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:48.445 [2024-10-05 08:48:24.784142] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:48.445 [2024-10-05 08:48:24.784157] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:48.445 [2024-10-05 08:48:24.784168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:48.445 [2024-10-05 08:48:24.784174] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:11:48.445 [2024-10-05 08:48:24.784184] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:48.445 [2024-10-05 08:48:24.784190] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:48.445 [2024-10-05 08:48:24.784198] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:48.445 08:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.445 08:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:48.445 08:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.445 08:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.445 08:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.445 08:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.445 08:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.445 08:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.445 08:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.445 08:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.445 08:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.445 08:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.445 08:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.445 08:48:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.445 08:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.445 08:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.445 08:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.445 "name": "Existed_Raid", 00:11:48.445 "uuid": "e3903a9c-711a-4b58-a497-2f7a60533fe9", 00:11:48.445 "strip_size_kb": 0, 00:11:48.445 "state": "configuring", 00:11:48.445 "raid_level": "raid1", 00:11:48.445 "superblock": true, 00:11:48.445 "num_base_bdevs": 4, 00:11:48.445 "num_base_bdevs_discovered": 0, 00:11:48.445 "num_base_bdevs_operational": 4, 00:11:48.445 "base_bdevs_list": [ 00:11:48.445 { 00:11:48.445 "name": "BaseBdev1", 00:11:48.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.445 "is_configured": false, 00:11:48.445 "data_offset": 0, 00:11:48.445 "data_size": 0 00:11:48.445 }, 00:11:48.445 { 00:11:48.445 "name": "BaseBdev2", 00:11:48.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.445 "is_configured": false, 00:11:48.445 "data_offset": 0, 00:11:48.445 "data_size": 0 00:11:48.445 }, 00:11:48.445 { 00:11:48.445 "name": "BaseBdev3", 00:11:48.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.445 "is_configured": false, 00:11:48.445 "data_offset": 0, 00:11:48.445 "data_size": 0 00:11:48.445 }, 00:11:48.445 { 00:11:48.445 "name": "BaseBdev4", 00:11:48.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.445 "is_configured": false, 00:11:48.445 "data_offset": 0, 00:11:48.445 "data_size": 0 00:11:48.445 } 00:11:48.445 ] 00:11:48.445 }' 00:11:48.445 08:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.445 08:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.016 08:48:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:49.016 08:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.016 08:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.016 [2024-10-05 08:48:25.231108] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:49.016 [2024-10-05 08:48:25.231240] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:49.016 08:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.016 08:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:49.016 08:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.016 08:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.016 [2024-10-05 08:48:25.243120] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:49.016 [2024-10-05 08:48:25.243205] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:49.016 [2024-10-05 08:48:25.243232] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:49.016 [2024-10-05 08:48:25.243254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:49.016 [2024-10-05 08:48:25.243272] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:49.016 [2024-10-05 08:48:25.243293] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:49.016 [2024-10-05 08:48:25.243311] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:11:49.016 [2024-10-05 08:48:25.243331] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:49.016 08:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.016 08:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:49.017 08:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.017 08:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.017 [2024-10-05 08:48:25.328970] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:49.017 BaseBdev1 00:11:49.017 08:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.017 08:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:49.017 08:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:49.017 08:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:49.017 08:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:49.017 08:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:49.017 08:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:49.017 08:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:49.017 08:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.017 08:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.017 08:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:49.017 08:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:49.017 08:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.017 08:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.017 [ 00:11:49.017 { 00:11:49.017 "name": "BaseBdev1", 00:11:49.017 "aliases": [ 00:11:49.017 "a7e3db74-f952-47ed-8d28-d229ed2bfe8c" 00:11:49.017 ], 00:11:49.017 "product_name": "Malloc disk", 00:11:49.017 "block_size": 512, 00:11:49.017 "num_blocks": 65536, 00:11:49.017 "uuid": "a7e3db74-f952-47ed-8d28-d229ed2bfe8c", 00:11:49.017 "assigned_rate_limits": { 00:11:49.017 "rw_ios_per_sec": 0, 00:11:49.017 "rw_mbytes_per_sec": 0, 00:11:49.017 "r_mbytes_per_sec": 0, 00:11:49.017 "w_mbytes_per_sec": 0 00:11:49.017 }, 00:11:49.017 "claimed": true, 00:11:49.017 "claim_type": "exclusive_write", 00:11:49.017 "zoned": false, 00:11:49.017 "supported_io_types": { 00:11:49.017 "read": true, 00:11:49.017 "write": true, 00:11:49.017 "unmap": true, 00:11:49.017 "flush": true, 00:11:49.017 "reset": true, 00:11:49.017 "nvme_admin": false, 00:11:49.017 "nvme_io": false, 00:11:49.017 "nvme_io_md": false, 00:11:49.017 "write_zeroes": true, 00:11:49.017 "zcopy": true, 00:11:49.017 "get_zone_info": false, 00:11:49.017 "zone_management": false, 00:11:49.017 "zone_append": false, 00:11:49.017 "compare": false, 00:11:49.017 "compare_and_write": false, 00:11:49.017 "abort": true, 00:11:49.017 "seek_hole": false, 00:11:49.017 "seek_data": false, 00:11:49.017 "copy": true, 00:11:49.017 "nvme_iov_md": false 00:11:49.017 }, 00:11:49.017 "memory_domains": [ 00:11:49.017 { 00:11:49.017 "dma_device_id": "system", 00:11:49.017 "dma_device_type": 1 00:11:49.017 }, 00:11:49.017 { 00:11:49.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.017 "dma_device_type": 2 00:11:49.017 } 00:11:49.017 ], 00:11:49.017 "driver_specific": {} 
00:11:49.017 } 00:11:49.017 ] 00:11:49.017 08:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.017 08:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:49.017 08:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:49.017 08:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.017 08:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.017 08:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.017 08:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.017 08:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:49.017 08:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.017 08:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.017 08:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.017 08:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.017 08:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.017 08:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.017 08:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.017 08:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.017 08:48:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.017 08:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.017 "name": "Existed_Raid", 00:11:49.017 "uuid": "7c674e7f-8198-4178-be74-d55dc16589f7", 00:11:49.017 "strip_size_kb": 0, 00:11:49.017 "state": "configuring", 00:11:49.017 "raid_level": "raid1", 00:11:49.017 "superblock": true, 00:11:49.017 "num_base_bdevs": 4, 00:11:49.017 "num_base_bdevs_discovered": 1, 00:11:49.017 "num_base_bdevs_operational": 4, 00:11:49.017 "base_bdevs_list": [ 00:11:49.017 { 00:11:49.017 "name": "BaseBdev1", 00:11:49.017 "uuid": "a7e3db74-f952-47ed-8d28-d229ed2bfe8c", 00:11:49.017 "is_configured": true, 00:11:49.017 "data_offset": 2048, 00:11:49.017 "data_size": 63488 00:11:49.017 }, 00:11:49.017 { 00:11:49.017 "name": "BaseBdev2", 00:11:49.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.017 "is_configured": false, 00:11:49.017 "data_offset": 0, 00:11:49.017 "data_size": 0 00:11:49.017 }, 00:11:49.017 { 00:11:49.017 "name": "BaseBdev3", 00:11:49.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.017 "is_configured": false, 00:11:49.017 "data_offset": 0, 00:11:49.017 "data_size": 0 00:11:49.017 }, 00:11:49.017 { 00:11:49.017 "name": "BaseBdev4", 00:11:49.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.017 "is_configured": false, 00:11:49.017 "data_offset": 0, 00:11:49.017 "data_size": 0 00:11:49.017 } 00:11:49.017 ] 00:11:49.017 }' 00:11:49.017 08:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.017 08:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.631 08:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:49.631 08:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.631 08:48:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:49.631 [2024-10-05 08:48:25.800185] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:49.631 [2024-10-05 08:48:25.800246] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:49.631 08:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.631 08:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:49.631 08:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.631 08:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.631 [2024-10-05 08:48:25.812212] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:49.631 [2024-10-05 08:48:25.814287] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:49.631 [2024-10-05 08:48:25.814379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:49.631 [2024-10-05 08:48:25.814394] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:49.631 [2024-10-05 08:48:25.814406] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:49.631 [2024-10-05 08:48:25.814414] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:49.631 [2024-10-05 08:48:25.814422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:49.631 08:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.631 08:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:49.631 08:48:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:49.631 08:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:49.631 08:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.631 08:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.631 08:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.631 08:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.631 08:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:49.631 08:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.631 08:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.631 08:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.631 08:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.631 08:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.632 08:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.632 08:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.632 08:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.632 08:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.632 08:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.632 "name": 
"Existed_Raid", 00:11:49.632 "uuid": "cb201fbb-25f1-4b6c-b6d5-80faae8e1a89", 00:11:49.632 "strip_size_kb": 0, 00:11:49.632 "state": "configuring", 00:11:49.632 "raid_level": "raid1", 00:11:49.632 "superblock": true, 00:11:49.632 "num_base_bdevs": 4, 00:11:49.632 "num_base_bdevs_discovered": 1, 00:11:49.632 "num_base_bdevs_operational": 4, 00:11:49.632 "base_bdevs_list": [ 00:11:49.632 { 00:11:49.632 "name": "BaseBdev1", 00:11:49.632 "uuid": "a7e3db74-f952-47ed-8d28-d229ed2bfe8c", 00:11:49.632 "is_configured": true, 00:11:49.632 "data_offset": 2048, 00:11:49.632 "data_size": 63488 00:11:49.632 }, 00:11:49.632 { 00:11:49.632 "name": "BaseBdev2", 00:11:49.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.632 "is_configured": false, 00:11:49.632 "data_offset": 0, 00:11:49.632 "data_size": 0 00:11:49.632 }, 00:11:49.632 { 00:11:49.632 "name": "BaseBdev3", 00:11:49.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.632 "is_configured": false, 00:11:49.632 "data_offset": 0, 00:11:49.632 "data_size": 0 00:11:49.632 }, 00:11:49.632 { 00:11:49.632 "name": "BaseBdev4", 00:11:49.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.632 "is_configured": false, 00:11:49.632 "data_offset": 0, 00:11:49.632 "data_size": 0 00:11:49.632 } 00:11:49.632 ] 00:11:49.632 }' 00:11:49.632 08:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.632 08:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.893 08:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:49.893 08:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.893 08:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.893 [2024-10-05 08:48:26.266176] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:49.893 
BaseBdev2 00:11:49.893 08:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.893 08:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:49.893 08:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:49.893 08:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:49.893 08:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:49.893 08:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:49.893 08:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:49.893 08:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:49.893 08:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.893 08:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.893 08:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.893 08:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:49.893 08:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.893 08:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.893 [ 00:11:49.893 { 00:11:49.893 "name": "BaseBdev2", 00:11:49.893 "aliases": [ 00:11:49.893 "32673a85-eb08-48ef-ae00-d63dbedbb424" 00:11:49.893 ], 00:11:49.893 "product_name": "Malloc disk", 00:11:49.893 "block_size": 512, 00:11:49.893 "num_blocks": 65536, 00:11:49.893 "uuid": "32673a85-eb08-48ef-ae00-d63dbedbb424", 00:11:49.893 "assigned_rate_limits": { 
00:11:49.893 "rw_ios_per_sec": 0, 00:11:49.893 "rw_mbytes_per_sec": 0, 00:11:49.893 "r_mbytes_per_sec": 0, 00:11:49.893 "w_mbytes_per_sec": 0 00:11:49.893 }, 00:11:49.893 "claimed": true, 00:11:49.893 "claim_type": "exclusive_write", 00:11:49.893 "zoned": false, 00:11:49.893 "supported_io_types": { 00:11:49.893 "read": true, 00:11:49.893 "write": true, 00:11:49.893 "unmap": true, 00:11:49.893 "flush": true, 00:11:49.893 "reset": true, 00:11:49.893 "nvme_admin": false, 00:11:49.893 "nvme_io": false, 00:11:49.893 "nvme_io_md": false, 00:11:49.893 "write_zeroes": true, 00:11:49.893 "zcopy": true, 00:11:49.893 "get_zone_info": false, 00:11:49.893 "zone_management": false, 00:11:49.893 "zone_append": false, 00:11:49.893 "compare": false, 00:11:49.893 "compare_and_write": false, 00:11:49.893 "abort": true, 00:11:49.893 "seek_hole": false, 00:11:49.893 "seek_data": false, 00:11:49.893 "copy": true, 00:11:49.893 "nvme_iov_md": false 00:11:49.893 }, 00:11:49.893 "memory_domains": [ 00:11:49.893 { 00:11:49.893 "dma_device_id": "system", 00:11:49.893 "dma_device_type": 1 00:11:49.893 }, 00:11:49.893 { 00:11:49.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.893 "dma_device_type": 2 00:11:49.893 } 00:11:49.893 ], 00:11:49.893 "driver_specific": {} 00:11:49.893 } 00:11:49.893 ] 00:11:49.893 08:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.893 08:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:49.893 08:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:49.893 08:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:49.893 08:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:49.893 08:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:11:49.893 08:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.893 08:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.893 08:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.893 08:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:49.893 08:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.893 08:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.893 08:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.893 08:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.893 08:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.893 08:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.893 08:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.893 08:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.893 08:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.893 08:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.893 "name": "Existed_Raid", 00:11:49.893 "uuid": "cb201fbb-25f1-4b6c-b6d5-80faae8e1a89", 00:11:49.893 "strip_size_kb": 0, 00:11:49.893 "state": "configuring", 00:11:49.893 "raid_level": "raid1", 00:11:49.893 "superblock": true, 00:11:49.893 "num_base_bdevs": 4, 00:11:49.893 "num_base_bdevs_discovered": 2, 00:11:49.893 "num_base_bdevs_operational": 4, 00:11:49.893 
"base_bdevs_list": [ 00:11:49.893 { 00:11:49.893 "name": "BaseBdev1", 00:11:49.893 "uuid": "a7e3db74-f952-47ed-8d28-d229ed2bfe8c", 00:11:49.893 "is_configured": true, 00:11:49.893 "data_offset": 2048, 00:11:49.893 "data_size": 63488 00:11:49.893 }, 00:11:49.893 { 00:11:49.893 "name": "BaseBdev2", 00:11:49.893 "uuid": "32673a85-eb08-48ef-ae00-d63dbedbb424", 00:11:49.894 "is_configured": true, 00:11:49.894 "data_offset": 2048, 00:11:49.894 "data_size": 63488 00:11:49.894 }, 00:11:49.894 { 00:11:49.894 "name": "BaseBdev3", 00:11:49.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.894 "is_configured": false, 00:11:49.894 "data_offset": 0, 00:11:49.894 "data_size": 0 00:11:49.894 }, 00:11:49.894 { 00:11:49.894 "name": "BaseBdev4", 00:11:49.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.894 "is_configured": false, 00:11:49.894 "data_offset": 0, 00:11:49.894 "data_size": 0 00:11:49.894 } 00:11:49.894 ] 00:11:49.894 }' 00:11:49.894 08:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.894 08:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.465 08:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:50.465 08:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.465 08:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.466 [2024-10-05 08:48:26.796540] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:50.466 BaseBdev3 00:11:50.466 08:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.466 08:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:50.466 08:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev3 00:11:50.466 08:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:50.466 08:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:50.466 08:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:50.466 08:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:50.466 08:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:50.466 08:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.466 08:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.466 08:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.466 08:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:50.466 08:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.466 08:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.466 [ 00:11:50.466 { 00:11:50.466 "name": "BaseBdev3", 00:11:50.466 "aliases": [ 00:11:50.466 "3af80d5e-885a-44c2-81d2-1f10803eabea" 00:11:50.466 ], 00:11:50.466 "product_name": "Malloc disk", 00:11:50.466 "block_size": 512, 00:11:50.466 "num_blocks": 65536, 00:11:50.466 "uuid": "3af80d5e-885a-44c2-81d2-1f10803eabea", 00:11:50.466 "assigned_rate_limits": { 00:11:50.466 "rw_ios_per_sec": 0, 00:11:50.466 "rw_mbytes_per_sec": 0, 00:11:50.466 "r_mbytes_per_sec": 0, 00:11:50.466 "w_mbytes_per_sec": 0 00:11:50.466 }, 00:11:50.466 "claimed": true, 00:11:50.466 "claim_type": "exclusive_write", 00:11:50.466 "zoned": false, 00:11:50.466 "supported_io_types": { 00:11:50.466 "read": true, 00:11:50.466 
"write": true, 00:11:50.466 "unmap": true, 00:11:50.466 "flush": true, 00:11:50.466 "reset": true, 00:11:50.466 "nvme_admin": false, 00:11:50.466 "nvme_io": false, 00:11:50.466 "nvme_io_md": false, 00:11:50.466 "write_zeroes": true, 00:11:50.466 "zcopy": true, 00:11:50.466 "get_zone_info": false, 00:11:50.466 "zone_management": false, 00:11:50.466 "zone_append": false, 00:11:50.466 "compare": false, 00:11:50.466 "compare_and_write": false, 00:11:50.466 "abort": true, 00:11:50.466 "seek_hole": false, 00:11:50.466 "seek_data": false, 00:11:50.466 "copy": true, 00:11:50.466 "nvme_iov_md": false 00:11:50.466 }, 00:11:50.466 "memory_domains": [ 00:11:50.466 { 00:11:50.466 "dma_device_id": "system", 00:11:50.466 "dma_device_type": 1 00:11:50.466 }, 00:11:50.466 { 00:11:50.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.466 "dma_device_type": 2 00:11:50.466 } 00:11:50.466 ], 00:11:50.466 "driver_specific": {} 00:11:50.466 } 00:11:50.466 ] 00:11:50.466 08:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.466 08:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:50.466 08:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:50.466 08:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:50.466 08:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:50.466 08:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.466 08:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.466 08:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.466 08:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:50.466 08:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.466 08:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.466 08:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.466 08:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.466 08:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.466 08:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.466 08:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.466 08:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.466 08:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.466 08:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.466 08:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.466 "name": "Existed_Raid", 00:11:50.466 "uuid": "cb201fbb-25f1-4b6c-b6d5-80faae8e1a89", 00:11:50.466 "strip_size_kb": 0, 00:11:50.466 "state": "configuring", 00:11:50.466 "raid_level": "raid1", 00:11:50.466 "superblock": true, 00:11:50.466 "num_base_bdevs": 4, 00:11:50.466 "num_base_bdevs_discovered": 3, 00:11:50.466 "num_base_bdevs_operational": 4, 00:11:50.466 "base_bdevs_list": [ 00:11:50.466 { 00:11:50.466 "name": "BaseBdev1", 00:11:50.466 "uuid": "a7e3db74-f952-47ed-8d28-d229ed2bfe8c", 00:11:50.466 "is_configured": true, 00:11:50.466 "data_offset": 2048, 00:11:50.466 "data_size": 63488 00:11:50.466 }, 00:11:50.466 { 00:11:50.466 "name": "BaseBdev2", 00:11:50.466 "uuid": 
"32673a85-eb08-48ef-ae00-d63dbedbb424", 00:11:50.466 "is_configured": true, 00:11:50.466 "data_offset": 2048, 00:11:50.466 "data_size": 63488 00:11:50.466 }, 00:11:50.466 { 00:11:50.466 "name": "BaseBdev3", 00:11:50.466 "uuid": "3af80d5e-885a-44c2-81d2-1f10803eabea", 00:11:50.466 "is_configured": true, 00:11:50.466 "data_offset": 2048, 00:11:50.466 "data_size": 63488 00:11:50.466 }, 00:11:50.466 { 00:11:50.466 "name": "BaseBdev4", 00:11:50.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.466 "is_configured": false, 00:11:50.466 "data_offset": 0, 00:11:50.466 "data_size": 0 00:11:50.466 } 00:11:50.466 ] 00:11:50.466 }' 00:11:50.466 08:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.466 08:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.037 08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:51.037 08:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.037 08:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.037 [2024-10-05 08:48:27.268919] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:51.037 [2024-10-05 08:48:27.269335] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:51.037 [2024-10-05 08:48:27.269364] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:51.037 [2024-10-05 08:48:27.269667] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:51.037 [2024-10-05 08:48:27.269839] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:51.037 [2024-10-05 08:48:27.269854] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:11:51.037 BaseBdev4 00:11:51.037 [2024-10-05 08:48:27.270024] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:51.037 08:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.037 08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:51.037 08:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:51.037 08:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:51.038 08:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:51.038 08:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:51.038 08:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:51.038 08:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:51.038 08:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.038 08:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.038 08:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.038 08:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:51.038 08:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.038 08:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.038 [ 00:11:51.038 { 00:11:51.038 "name": "BaseBdev4", 00:11:51.038 "aliases": [ 00:11:51.038 "0600825b-207e-4d19-8ff6-ede401717e5f" 00:11:51.038 ], 00:11:51.038 "product_name": "Malloc disk", 00:11:51.038 "block_size": 512, 00:11:51.038 
"num_blocks": 65536, 00:11:51.038 "uuid": "0600825b-207e-4d19-8ff6-ede401717e5f", 00:11:51.038 "assigned_rate_limits": { 00:11:51.038 "rw_ios_per_sec": 0, 00:11:51.038 "rw_mbytes_per_sec": 0, 00:11:51.038 "r_mbytes_per_sec": 0, 00:11:51.038 "w_mbytes_per_sec": 0 00:11:51.038 }, 00:11:51.038 "claimed": true, 00:11:51.038 "claim_type": "exclusive_write", 00:11:51.038 "zoned": false, 00:11:51.038 "supported_io_types": { 00:11:51.038 "read": true, 00:11:51.038 "write": true, 00:11:51.038 "unmap": true, 00:11:51.038 "flush": true, 00:11:51.038 "reset": true, 00:11:51.038 "nvme_admin": false, 00:11:51.038 "nvme_io": false, 00:11:51.038 "nvme_io_md": false, 00:11:51.038 "write_zeroes": true, 00:11:51.038 "zcopy": true, 00:11:51.038 "get_zone_info": false, 00:11:51.038 "zone_management": false, 00:11:51.038 "zone_append": false, 00:11:51.038 "compare": false, 00:11:51.038 "compare_and_write": false, 00:11:51.038 "abort": true, 00:11:51.038 "seek_hole": false, 00:11:51.038 "seek_data": false, 00:11:51.038 "copy": true, 00:11:51.038 "nvme_iov_md": false 00:11:51.038 }, 00:11:51.038 "memory_domains": [ 00:11:51.038 { 00:11:51.038 "dma_device_id": "system", 00:11:51.038 "dma_device_type": 1 00:11:51.038 }, 00:11:51.038 { 00:11:51.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.038 "dma_device_type": 2 00:11:51.038 } 00:11:51.038 ], 00:11:51.038 "driver_specific": {} 00:11:51.038 } 00:11:51.038 ] 00:11:51.038 08:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.038 08:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:51.038 08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:51.038 08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:51.038 08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:11:51.038 08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.038 08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:51.038 08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.038 08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.038 08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:51.038 08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.038 08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.038 08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.038 08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.038 08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.038 08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.038 08:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.038 08:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.038 08:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.038 08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.038 "name": "Existed_Raid", 00:11:51.038 "uuid": "cb201fbb-25f1-4b6c-b6d5-80faae8e1a89", 00:11:51.038 "strip_size_kb": 0, 00:11:51.038 "state": "online", 00:11:51.038 "raid_level": "raid1", 00:11:51.038 "superblock": true, 00:11:51.038 "num_base_bdevs": 4, 
00:11:51.038 "num_base_bdevs_discovered": 4, 00:11:51.038 "num_base_bdevs_operational": 4, 00:11:51.038 "base_bdevs_list": [ 00:11:51.038 { 00:11:51.038 "name": "BaseBdev1", 00:11:51.038 "uuid": "a7e3db74-f952-47ed-8d28-d229ed2bfe8c", 00:11:51.038 "is_configured": true, 00:11:51.038 "data_offset": 2048, 00:11:51.038 "data_size": 63488 00:11:51.038 }, 00:11:51.038 { 00:11:51.038 "name": "BaseBdev2", 00:11:51.038 "uuid": "32673a85-eb08-48ef-ae00-d63dbedbb424", 00:11:51.038 "is_configured": true, 00:11:51.038 "data_offset": 2048, 00:11:51.038 "data_size": 63488 00:11:51.038 }, 00:11:51.038 { 00:11:51.038 "name": "BaseBdev3", 00:11:51.038 "uuid": "3af80d5e-885a-44c2-81d2-1f10803eabea", 00:11:51.038 "is_configured": true, 00:11:51.038 "data_offset": 2048, 00:11:51.038 "data_size": 63488 00:11:51.038 }, 00:11:51.038 { 00:11:51.038 "name": "BaseBdev4", 00:11:51.038 "uuid": "0600825b-207e-4d19-8ff6-ede401717e5f", 00:11:51.038 "is_configured": true, 00:11:51.038 "data_offset": 2048, 00:11:51.038 "data_size": 63488 00:11:51.038 } 00:11:51.038 ] 00:11:51.038 }' 00:11:51.038 08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.038 08:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.332 08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:51.332 08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:51.332 08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:51.333 08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:51.333 08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:51.333 08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:51.333 
08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:51.333 08:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.333 08:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.333 08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:51.333 [2024-10-05 08:48:27.776396] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:51.333 08:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.593 08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:51.593 "name": "Existed_Raid", 00:11:51.593 "aliases": [ 00:11:51.593 "cb201fbb-25f1-4b6c-b6d5-80faae8e1a89" 00:11:51.593 ], 00:11:51.593 "product_name": "Raid Volume", 00:11:51.593 "block_size": 512, 00:11:51.593 "num_blocks": 63488, 00:11:51.593 "uuid": "cb201fbb-25f1-4b6c-b6d5-80faae8e1a89", 00:11:51.593 "assigned_rate_limits": { 00:11:51.593 "rw_ios_per_sec": 0, 00:11:51.593 "rw_mbytes_per_sec": 0, 00:11:51.593 "r_mbytes_per_sec": 0, 00:11:51.593 "w_mbytes_per_sec": 0 00:11:51.593 }, 00:11:51.593 "claimed": false, 00:11:51.593 "zoned": false, 00:11:51.593 "supported_io_types": { 00:11:51.593 "read": true, 00:11:51.593 "write": true, 00:11:51.593 "unmap": false, 00:11:51.593 "flush": false, 00:11:51.593 "reset": true, 00:11:51.593 "nvme_admin": false, 00:11:51.593 "nvme_io": false, 00:11:51.593 "nvme_io_md": false, 00:11:51.593 "write_zeroes": true, 00:11:51.593 "zcopy": false, 00:11:51.593 "get_zone_info": false, 00:11:51.593 "zone_management": false, 00:11:51.593 "zone_append": false, 00:11:51.593 "compare": false, 00:11:51.593 "compare_and_write": false, 00:11:51.593 "abort": false, 00:11:51.593 "seek_hole": false, 00:11:51.593 "seek_data": false, 00:11:51.593 "copy": false, 00:11:51.593 
"nvme_iov_md": false 00:11:51.593 }, 00:11:51.593 "memory_domains": [ 00:11:51.593 { 00:11:51.593 "dma_device_id": "system", 00:11:51.593 "dma_device_type": 1 00:11:51.593 }, 00:11:51.593 { 00:11:51.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.593 "dma_device_type": 2 00:11:51.593 }, 00:11:51.594 { 00:11:51.594 "dma_device_id": "system", 00:11:51.594 "dma_device_type": 1 00:11:51.594 }, 00:11:51.594 { 00:11:51.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.594 "dma_device_type": 2 00:11:51.594 }, 00:11:51.594 { 00:11:51.594 "dma_device_id": "system", 00:11:51.594 "dma_device_type": 1 00:11:51.594 }, 00:11:51.594 { 00:11:51.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.594 "dma_device_type": 2 00:11:51.594 }, 00:11:51.594 { 00:11:51.594 "dma_device_id": "system", 00:11:51.594 "dma_device_type": 1 00:11:51.594 }, 00:11:51.594 { 00:11:51.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.594 "dma_device_type": 2 00:11:51.594 } 00:11:51.594 ], 00:11:51.594 "driver_specific": { 00:11:51.594 "raid": { 00:11:51.594 "uuid": "cb201fbb-25f1-4b6c-b6d5-80faae8e1a89", 00:11:51.594 "strip_size_kb": 0, 00:11:51.594 "state": "online", 00:11:51.594 "raid_level": "raid1", 00:11:51.594 "superblock": true, 00:11:51.594 "num_base_bdevs": 4, 00:11:51.594 "num_base_bdevs_discovered": 4, 00:11:51.594 "num_base_bdevs_operational": 4, 00:11:51.594 "base_bdevs_list": [ 00:11:51.594 { 00:11:51.594 "name": "BaseBdev1", 00:11:51.594 "uuid": "a7e3db74-f952-47ed-8d28-d229ed2bfe8c", 00:11:51.594 "is_configured": true, 00:11:51.594 "data_offset": 2048, 00:11:51.594 "data_size": 63488 00:11:51.594 }, 00:11:51.594 { 00:11:51.594 "name": "BaseBdev2", 00:11:51.594 "uuid": "32673a85-eb08-48ef-ae00-d63dbedbb424", 00:11:51.594 "is_configured": true, 00:11:51.594 "data_offset": 2048, 00:11:51.594 "data_size": 63488 00:11:51.594 }, 00:11:51.594 { 00:11:51.594 "name": "BaseBdev3", 00:11:51.594 "uuid": "3af80d5e-885a-44c2-81d2-1f10803eabea", 00:11:51.594 "is_configured": true, 
00:11:51.594 "data_offset": 2048, 00:11:51.594 "data_size": 63488 00:11:51.594 }, 00:11:51.594 { 00:11:51.594 "name": "BaseBdev4", 00:11:51.594 "uuid": "0600825b-207e-4d19-8ff6-ede401717e5f", 00:11:51.594 "is_configured": true, 00:11:51.594 "data_offset": 2048, 00:11:51.594 "data_size": 63488 00:11:51.594 } 00:11:51.594 ] 00:11:51.594 } 00:11:51.594 } 00:11:51.594 }' 00:11:51.594 08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:51.594 08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:51.594 BaseBdev2 00:11:51.594 BaseBdev3 00:11:51.594 BaseBdev4' 00:11:51.594 08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:51.594 08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:51.594 08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:51.594 08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:51.594 08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:51.594 08:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.594 08:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.594 08:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.594 08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:51.594 08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:51.594 08:48:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:51.594 08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:51.594 08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:51.594 08:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.594 08:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.594 08:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.594 08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:51.594 08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:51.594 08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:51.594 08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:51.594 08:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:51.594 08:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.594 08:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.594 08:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.594 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:51.594 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:51.594 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:51.594 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:51.594 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:51.594 08:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.594 08:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.594 08:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.855 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:51.855 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:51.855 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:51.855 08:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.855 08:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.855 [2024-10-05 08:48:28.079598] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:51.855 08:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.855 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:51.855 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:51.855 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:51.855 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:51.855 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:51.855 08:48:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:51.855 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.855 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:51.855 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.855 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.855 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:51.855 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.855 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.855 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.855 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.855 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.855 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.855 08:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.855 08:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.855 08:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.855 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.855 "name": "Existed_Raid", 00:11:51.855 "uuid": "cb201fbb-25f1-4b6c-b6d5-80faae8e1a89", 00:11:51.855 "strip_size_kb": 0, 00:11:51.855 
"state": "online", 00:11:51.855 "raid_level": "raid1", 00:11:51.855 "superblock": true, 00:11:51.855 "num_base_bdevs": 4, 00:11:51.855 "num_base_bdevs_discovered": 3, 00:11:51.855 "num_base_bdevs_operational": 3, 00:11:51.855 "base_bdevs_list": [ 00:11:51.855 { 00:11:51.855 "name": null, 00:11:51.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.855 "is_configured": false, 00:11:51.855 "data_offset": 0, 00:11:51.855 "data_size": 63488 00:11:51.855 }, 00:11:51.855 { 00:11:51.855 "name": "BaseBdev2", 00:11:51.855 "uuid": "32673a85-eb08-48ef-ae00-d63dbedbb424", 00:11:51.855 "is_configured": true, 00:11:51.855 "data_offset": 2048, 00:11:51.855 "data_size": 63488 00:11:51.855 }, 00:11:51.855 { 00:11:51.855 "name": "BaseBdev3", 00:11:51.855 "uuid": "3af80d5e-885a-44c2-81d2-1f10803eabea", 00:11:51.855 "is_configured": true, 00:11:51.855 "data_offset": 2048, 00:11:51.855 "data_size": 63488 00:11:51.855 }, 00:11:51.855 { 00:11:51.855 "name": "BaseBdev4", 00:11:51.855 "uuid": "0600825b-207e-4d19-8ff6-ede401717e5f", 00:11:51.855 "is_configured": true, 00:11:51.855 "data_offset": 2048, 00:11:51.855 "data_size": 63488 00:11:51.855 } 00:11:51.855 ] 00:11:51.855 }' 00:11:51.855 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.855 08:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.424 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:52.424 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:52.424 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.424 08:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.424 08:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.424 08:48:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:52.424 08:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.424 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:52.424 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:52.424 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:52.424 08:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.424 08:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.424 [2024-10-05 08:48:28.652757] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:52.424 08:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.424 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:52.424 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:52.424 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.424 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:52.424 08:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.424 08:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.424 08:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.424 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:52.424 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid 
'!=' Existed_Raid ']' 00:11:52.424 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:52.424 08:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.424 08:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.424 [2024-10-05 08:48:28.814094] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:52.685 08:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.685 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:52.685 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:52.685 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:52.685 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.685 08:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.685 08:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.685 08:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.685 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:52.685 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:52.685 08:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:52.685 08:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.685 08:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.685 [2024-10-05 08:48:28.953795] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:52.685 [2024-10-05 08:48:28.953921] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:52.685 [2024-10-05 08:48:29.053941] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:52.685 [2024-10-05 08:48:29.054024] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:52.685 [2024-10-05 08:48:29.054038] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:52.685 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.685 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:52.685 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:52.685 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.685 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:52.685 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.685 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.685 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.685 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:52.685 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:52.685 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:52.685 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:52.685 08:48:29 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:52.685 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:52.685 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.685 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.685 BaseBdev2 00:11:52.946 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.946 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:52.946 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:52.946 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:52.946 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:52.946 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:52.946 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:52.946 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:52.946 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.946 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.946 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.946 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:52.946 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.946 08:48:29 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:11:52.946 [ 00:11:52.946 { 00:11:52.946 "name": "BaseBdev2", 00:11:52.946 "aliases": [ 00:11:52.946 "75a2f463-7a17-406f-839b-749683a2fc34" 00:11:52.946 ], 00:11:52.946 "product_name": "Malloc disk", 00:11:52.946 "block_size": 512, 00:11:52.946 "num_blocks": 65536, 00:11:52.946 "uuid": "75a2f463-7a17-406f-839b-749683a2fc34", 00:11:52.946 "assigned_rate_limits": { 00:11:52.947 "rw_ios_per_sec": 0, 00:11:52.947 "rw_mbytes_per_sec": 0, 00:11:52.947 "r_mbytes_per_sec": 0, 00:11:52.947 "w_mbytes_per_sec": 0 00:11:52.947 }, 00:11:52.947 "claimed": false, 00:11:52.947 "zoned": false, 00:11:52.947 "supported_io_types": { 00:11:52.947 "read": true, 00:11:52.947 "write": true, 00:11:52.947 "unmap": true, 00:11:52.947 "flush": true, 00:11:52.947 "reset": true, 00:11:52.947 "nvme_admin": false, 00:11:52.947 "nvme_io": false, 00:11:52.947 "nvme_io_md": false, 00:11:52.947 "write_zeroes": true, 00:11:52.947 "zcopy": true, 00:11:52.947 "get_zone_info": false, 00:11:52.947 "zone_management": false, 00:11:52.947 "zone_append": false, 00:11:52.947 "compare": false, 00:11:52.947 "compare_and_write": false, 00:11:52.947 "abort": true, 00:11:52.947 "seek_hole": false, 00:11:52.947 "seek_data": false, 00:11:52.947 "copy": true, 00:11:52.947 "nvme_iov_md": false 00:11:52.947 }, 00:11:52.947 "memory_domains": [ 00:11:52.947 { 00:11:52.947 "dma_device_id": "system", 00:11:52.947 "dma_device_type": 1 00:11:52.947 }, 00:11:52.947 { 00:11:52.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.947 "dma_device_type": 2 00:11:52.947 } 00:11:52.947 ], 00:11:52.947 "driver_specific": {} 00:11:52.947 } 00:11:52.947 ] 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:52.947 08:48:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.947 BaseBdev3 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.947 08:48:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.947 [ 00:11:52.947 { 00:11:52.947 "name": "BaseBdev3", 00:11:52.947 "aliases": [ 00:11:52.947 "63201b02-4038-4bf9-9317-3836372981f2" 00:11:52.947 ], 00:11:52.947 "product_name": "Malloc disk", 00:11:52.947 "block_size": 512, 00:11:52.947 "num_blocks": 65536, 00:11:52.947 "uuid": "63201b02-4038-4bf9-9317-3836372981f2", 00:11:52.947 "assigned_rate_limits": { 00:11:52.947 "rw_ios_per_sec": 0, 00:11:52.947 "rw_mbytes_per_sec": 0, 00:11:52.947 "r_mbytes_per_sec": 0, 00:11:52.947 "w_mbytes_per_sec": 0 00:11:52.947 }, 00:11:52.947 "claimed": false, 00:11:52.947 "zoned": false, 00:11:52.947 "supported_io_types": { 00:11:52.947 "read": true, 00:11:52.947 "write": true, 00:11:52.947 "unmap": true, 00:11:52.947 "flush": true, 00:11:52.947 "reset": true, 00:11:52.947 "nvme_admin": false, 00:11:52.947 "nvme_io": false, 00:11:52.947 "nvme_io_md": false, 00:11:52.947 "write_zeroes": true, 00:11:52.947 "zcopy": true, 00:11:52.947 "get_zone_info": false, 00:11:52.947 "zone_management": false, 00:11:52.947 "zone_append": false, 00:11:52.947 "compare": false, 00:11:52.947 "compare_and_write": false, 00:11:52.947 "abort": true, 00:11:52.947 "seek_hole": false, 00:11:52.947 "seek_data": false, 00:11:52.947 "copy": true, 00:11:52.947 "nvme_iov_md": false 00:11:52.947 }, 00:11:52.947 "memory_domains": [ 00:11:52.947 { 00:11:52.947 "dma_device_id": "system", 00:11:52.947 "dma_device_type": 1 00:11:52.947 }, 00:11:52.947 { 00:11:52.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.947 "dma_device_type": 2 00:11:52.947 } 00:11:52.947 ], 00:11:52.947 "driver_specific": {} 00:11:52.947 } 00:11:52.947 ] 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.947 BaseBdev4 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.947 [ 00:11:52.947 { 00:11:52.947 "name": "BaseBdev4", 00:11:52.947 "aliases": [ 00:11:52.947 "0921dee9-ebc8-4f80-8b8b-c27f11694f9b" 00:11:52.947 ], 00:11:52.947 "product_name": "Malloc disk", 00:11:52.947 "block_size": 512, 00:11:52.947 "num_blocks": 65536, 00:11:52.947 "uuid": "0921dee9-ebc8-4f80-8b8b-c27f11694f9b", 00:11:52.947 "assigned_rate_limits": { 00:11:52.947 "rw_ios_per_sec": 0, 00:11:52.947 "rw_mbytes_per_sec": 0, 00:11:52.947 "r_mbytes_per_sec": 0, 00:11:52.947 "w_mbytes_per_sec": 0 00:11:52.947 }, 00:11:52.947 "claimed": false, 00:11:52.947 "zoned": false, 00:11:52.947 "supported_io_types": { 00:11:52.947 "read": true, 00:11:52.947 "write": true, 00:11:52.947 "unmap": true, 00:11:52.947 "flush": true, 00:11:52.947 "reset": true, 00:11:52.947 "nvme_admin": false, 00:11:52.947 "nvme_io": false, 00:11:52.947 "nvme_io_md": false, 00:11:52.947 "write_zeroes": true, 00:11:52.947 "zcopy": true, 00:11:52.947 "get_zone_info": false, 00:11:52.947 "zone_management": false, 00:11:52.947 "zone_append": false, 00:11:52.947 "compare": false, 00:11:52.947 "compare_and_write": false, 00:11:52.947 "abort": true, 00:11:52.947 "seek_hole": false, 00:11:52.947 "seek_data": false, 00:11:52.947 "copy": true, 00:11:52.947 "nvme_iov_md": false 00:11:52.947 }, 00:11:52.947 "memory_domains": [ 00:11:52.947 { 00:11:52.947 "dma_device_id": "system", 00:11:52.947 "dma_device_type": 1 00:11:52.947 }, 00:11:52.947 { 00:11:52.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.947 "dma_device_type": 2 00:11:52.947 } 00:11:52.947 ], 00:11:52.947 "driver_specific": {} 00:11:52.947 } 00:11:52.947 ] 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.947 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.947 [2024-10-05 08:48:29.368713] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:52.947 [2024-10-05 08:48:29.368831] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:52.948 [2024-10-05 08:48:29.368877] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:52.948 [2024-10-05 08:48:29.370790] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:52.948 [2024-10-05 08:48:29.370884] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:52.948 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.948 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:52.948 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.948 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.948 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.948 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:52.948 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.948 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.948 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.948 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.948 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.948 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.948 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.948 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.948 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.948 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.207 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.208 "name": "Existed_Raid", 00:11:53.208 "uuid": "13b1420b-4c66-4fb5-9583-d30906a56310", 00:11:53.208 "strip_size_kb": 0, 00:11:53.208 "state": "configuring", 00:11:53.208 "raid_level": "raid1", 00:11:53.208 "superblock": true, 00:11:53.208 "num_base_bdevs": 4, 00:11:53.208 "num_base_bdevs_discovered": 3, 00:11:53.208 "num_base_bdevs_operational": 4, 00:11:53.208 "base_bdevs_list": [ 00:11:53.208 { 00:11:53.208 "name": "BaseBdev1", 00:11:53.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.208 "is_configured": false, 00:11:53.208 "data_offset": 0, 00:11:53.208 "data_size": 0 00:11:53.208 }, 00:11:53.208 { 00:11:53.208 "name": "BaseBdev2", 00:11:53.208 "uuid": "75a2f463-7a17-406f-839b-749683a2fc34", 
00:11:53.208 "is_configured": true, 00:11:53.208 "data_offset": 2048, 00:11:53.208 "data_size": 63488 00:11:53.208 }, 00:11:53.208 { 00:11:53.208 "name": "BaseBdev3", 00:11:53.208 "uuid": "63201b02-4038-4bf9-9317-3836372981f2", 00:11:53.208 "is_configured": true, 00:11:53.208 "data_offset": 2048, 00:11:53.208 "data_size": 63488 00:11:53.208 }, 00:11:53.208 { 00:11:53.208 "name": "BaseBdev4", 00:11:53.208 "uuid": "0921dee9-ebc8-4f80-8b8b-c27f11694f9b", 00:11:53.208 "is_configured": true, 00:11:53.208 "data_offset": 2048, 00:11:53.208 "data_size": 63488 00:11:53.208 } 00:11:53.208 ] 00:11:53.208 }' 00:11:53.208 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.208 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.468 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:53.468 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.468 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.468 [2024-10-05 08:48:29.767992] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:53.468 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.468 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:53.468 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.468 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:53.468 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.468 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:53.468 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:53.468 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.468 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.468 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.468 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.468 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.468 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.468 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.468 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.468 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.468 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.468 "name": "Existed_Raid", 00:11:53.468 "uuid": "13b1420b-4c66-4fb5-9583-d30906a56310", 00:11:53.468 "strip_size_kb": 0, 00:11:53.468 "state": "configuring", 00:11:53.468 "raid_level": "raid1", 00:11:53.468 "superblock": true, 00:11:53.468 "num_base_bdevs": 4, 00:11:53.468 "num_base_bdevs_discovered": 2, 00:11:53.468 "num_base_bdevs_operational": 4, 00:11:53.468 "base_bdevs_list": [ 00:11:53.468 { 00:11:53.468 "name": "BaseBdev1", 00:11:53.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.468 "is_configured": false, 00:11:53.468 "data_offset": 0, 00:11:53.468 "data_size": 0 00:11:53.468 }, 00:11:53.468 { 00:11:53.468 "name": null, 00:11:53.468 "uuid": "75a2f463-7a17-406f-839b-749683a2fc34", 00:11:53.468 
"is_configured": false, 00:11:53.468 "data_offset": 0, 00:11:53.468 "data_size": 63488 00:11:53.468 }, 00:11:53.468 { 00:11:53.468 "name": "BaseBdev3", 00:11:53.468 "uuid": "63201b02-4038-4bf9-9317-3836372981f2", 00:11:53.468 "is_configured": true, 00:11:53.468 "data_offset": 2048, 00:11:53.468 "data_size": 63488 00:11:53.468 }, 00:11:53.468 { 00:11:53.468 "name": "BaseBdev4", 00:11:53.468 "uuid": "0921dee9-ebc8-4f80-8b8b-c27f11694f9b", 00:11:53.468 "is_configured": true, 00:11:53.468 "data_offset": 2048, 00:11:53.468 "data_size": 63488 00:11:53.468 } 00:11:53.468 ] 00:11:53.468 }' 00:11:53.468 08:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.468 08:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.727 08:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.727 08:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.727 08:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.727 08:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:53.987 08:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.987 08:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:53.987 08:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:53.987 08:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.987 08:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.987 [2024-10-05 08:48:30.282167] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:53.987 BaseBdev1 
00:11:53.987 08:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.987 08:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:53.987 08:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:53.987 08:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:53.987 08:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:53.987 08:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:53.987 08:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:53.987 08:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:53.987 08:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.988 08:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.988 08:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.988 08:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:53.988 08:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.988 08:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.988 [ 00:11:53.988 { 00:11:53.988 "name": "BaseBdev1", 00:11:53.988 "aliases": [ 00:11:53.988 "c7677c3e-1bac-4577-8083-eca85da9d9a9" 00:11:53.988 ], 00:11:53.988 "product_name": "Malloc disk", 00:11:53.988 "block_size": 512, 00:11:53.988 "num_blocks": 65536, 00:11:53.988 "uuid": "c7677c3e-1bac-4577-8083-eca85da9d9a9", 00:11:53.988 "assigned_rate_limits": { 00:11:53.988 
"rw_ios_per_sec": 0, 00:11:53.988 "rw_mbytes_per_sec": 0, 00:11:53.988 "r_mbytes_per_sec": 0, 00:11:53.988 "w_mbytes_per_sec": 0 00:11:53.988 }, 00:11:53.988 "claimed": true, 00:11:53.988 "claim_type": "exclusive_write", 00:11:53.988 "zoned": false, 00:11:53.988 "supported_io_types": { 00:11:53.988 "read": true, 00:11:53.988 "write": true, 00:11:53.988 "unmap": true, 00:11:53.988 "flush": true, 00:11:53.988 "reset": true, 00:11:53.988 "nvme_admin": false, 00:11:53.988 "nvme_io": false, 00:11:53.988 "nvme_io_md": false, 00:11:53.988 "write_zeroes": true, 00:11:53.988 "zcopy": true, 00:11:53.988 "get_zone_info": false, 00:11:53.988 "zone_management": false, 00:11:53.988 "zone_append": false, 00:11:53.988 "compare": false, 00:11:53.988 "compare_and_write": false, 00:11:53.988 "abort": true, 00:11:53.988 "seek_hole": false, 00:11:53.988 "seek_data": false, 00:11:53.988 "copy": true, 00:11:53.988 "nvme_iov_md": false 00:11:53.988 }, 00:11:53.988 "memory_domains": [ 00:11:53.988 { 00:11:53.988 "dma_device_id": "system", 00:11:53.988 "dma_device_type": 1 00:11:53.988 }, 00:11:53.988 { 00:11:53.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.988 "dma_device_type": 2 00:11:53.988 } 00:11:53.988 ], 00:11:53.988 "driver_specific": {} 00:11:53.988 } 00:11:53.988 ] 00:11:53.988 08:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.988 08:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:53.988 08:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:53.988 08:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.988 08:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:53.988 08:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:53.988 08:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.988 08:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:53.988 08:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.988 08:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.988 08:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.988 08:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.988 08:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.988 08:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.988 08:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.988 08:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.988 08:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.988 08:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.988 "name": "Existed_Raid", 00:11:53.988 "uuid": "13b1420b-4c66-4fb5-9583-d30906a56310", 00:11:53.988 "strip_size_kb": 0, 00:11:53.988 "state": "configuring", 00:11:53.988 "raid_level": "raid1", 00:11:53.988 "superblock": true, 00:11:53.988 "num_base_bdevs": 4, 00:11:53.988 "num_base_bdevs_discovered": 3, 00:11:53.988 "num_base_bdevs_operational": 4, 00:11:53.988 "base_bdevs_list": [ 00:11:53.988 { 00:11:53.988 "name": "BaseBdev1", 00:11:53.988 "uuid": "c7677c3e-1bac-4577-8083-eca85da9d9a9", 00:11:53.988 "is_configured": true, 00:11:53.988 "data_offset": 2048, 00:11:53.988 "data_size": 63488 
00:11:53.988 }, 00:11:53.988 { 00:11:53.988 "name": null, 00:11:53.988 "uuid": "75a2f463-7a17-406f-839b-749683a2fc34", 00:11:53.988 "is_configured": false, 00:11:53.988 "data_offset": 0, 00:11:53.988 "data_size": 63488 00:11:53.988 }, 00:11:53.988 { 00:11:53.988 "name": "BaseBdev3", 00:11:53.988 "uuid": "63201b02-4038-4bf9-9317-3836372981f2", 00:11:53.988 "is_configured": true, 00:11:53.988 "data_offset": 2048, 00:11:53.988 "data_size": 63488 00:11:53.988 }, 00:11:53.988 { 00:11:53.988 "name": "BaseBdev4", 00:11:53.988 "uuid": "0921dee9-ebc8-4f80-8b8b-c27f11694f9b", 00:11:53.988 "is_configured": true, 00:11:53.988 "data_offset": 2048, 00:11:53.988 "data_size": 63488 00:11:53.988 } 00:11:53.988 ] 00:11:53.988 }' 00:11:53.988 08:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.988 08:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.599 08:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.599 08:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:54.599 08:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.599 08:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.599 08:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.599 08:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:54.599 08:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:54.599 08:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.599 08:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.599 
[2024-10-05 08:48:30.805333] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:54.599 08:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.599 08:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:54.599 08:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.599 08:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:54.599 08:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.599 08:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.599 08:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:54.599 08:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.599 08:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.599 08:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.599 08:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.599 08:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.599 08:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.599 08:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.599 08:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.599 08:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.599 08:48:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.599 "name": "Existed_Raid", 00:11:54.599 "uuid": "13b1420b-4c66-4fb5-9583-d30906a56310", 00:11:54.599 "strip_size_kb": 0, 00:11:54.599 "state": "configuring", 00:11:54.599 "raid_level": "raid1", 00:11:54.599 "superblock": true, 00:11:54.599 "num_base_bdevs": 4, 00:11:54.599 "num_base_bdevs_discovered": 2, 00:11:54.599 "num_base_bdevs_operational": 4, 00:11:54.599 "base_bdevs_list": [ 00:11:54.599 { 00:11:54.599 "name": "BaseBdev1", 00:11:54.599 "uuid": "c7677c3e-1bac-4577-8083-eca85da9d9a9", 00:11:54.599 "is_configured": true, 00:11:54.599 "data_offset": 2048, 00:11:54.599 "data_size": 63488 00:11:54.599 }, 00:11:54.599 { 00:11:54.599 "name": null, 00:11:54.599 "uuid": "75a2f463-7a17-406f-839b-749683a2fc34", 00:11:54.599 "is_configured": false, 00:11:54.599 "data_offset": 0, 00:11:54.599 "data_size": 63488 00:11:54.599 }, 00:11:54.599 { 00:11:54.599 "name": null, 00:11:54.599 "uuid": "63201b02-4038-4bf9-9317-3836372981f2", 00:11:54.599 "is_configured": false, 00:11:54.599 "data_offset": 0, 00:11:54.599 "data_size": 63488 00:11:54.599 }, 00:11:54.599 { 00:11:54.599 "name": "BaseBdev4", 00:11:54.599 "uuid": "0921dee9-ebc8-4f80-8b8b-c27f11694f9b", 00:11:54.599 "is_configured": true, 00:11:54.599 "data_offset": 2048, 00:11:54.599 "data_size": 63488 00:11:54.599 } 00:11:54.599 ] 00:11:54.599 }' 00:11:54.599 08:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.599 08:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.859 08:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:54.859 08:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.859 08:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.859 
08:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.859 08:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.859 08:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:54.859 08:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:54.859 08:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.859 08:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.859 [2024-10-05 08:48:31.288520] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:54.859 08:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.859 08:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:54.859 08:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.859 08:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:54.859 08:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.859 08:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.859 08:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:54.859 08:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.859 08:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.859 08:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:54.859 08:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.859 08:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.859 08:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.859 08:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.859 08:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.859 08:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.859 08:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.859 "name": "Existed_Raid", 00:11:54.859 "uuid": "13b1420b-4c66-4fb5-9583-d30906a56310", 00:11:54.859 "strip_size_kb": 0, 00:11:54.859 "state": "configuring", 00:11:54.859 "raid_level": "raid1", 00:11:54.859 "superblock": true, 00:11:54.859 "num_base_bdevs": 4, 00:11:54.859 "num_base_bdevs_discovered": 3, 00:11:54.859 "num_base_bdevs_operational": 4, 00:11:54.859 "base_bdevs_list": [ 00:11:54.859 { 00:11:54.859 "name": "BaseBdev1", 00:11:54.859 "uuid": "c7677c3e-1bac-4577-8083-eca85da9d9a9", 00:11:54.859 "is_configured": true, 00:11:54.859 "data_offset": 2048, 00:11:54.859 "data_size": 63488 00:11:54.859 }, 00:11:54.859 { 00:11:54.859 "name": null, 00:11:54.859 "uuid": "75a2f463-7a17-406f-839b-749683a2fc34", 00:11:54.859 "is_configured": false, 00:11:54.859 "data_offset": 0, 00:11:54.859 "data_size": 63488 00:11:54.859 }, 00:11:54.859 { 00:11:54.859 "name": "BaseBdev3", 00:11:54.859 "uuid": "63201b02-4038-4bf9-9317-3836372981f2", 00:11:54.859 "is_configured": true, 00:11:54.859 "data_offset": 2048, 00:11:54.859 "data_size": 63488 00:11:54.859 }, 00:11:54.859 { 00:11:54.859 "name": "BaseBdev4", 00:11:54.859 "uuid": 
"0921dee9-ebc8-4f80-8b8b-c27f11694f9b", 00:11:54.859 "is_configured": true, 00:11:54.859 "data_offset": 2048, 00:11:54.859 "data_size": 63488 00:11:54.859 } 00:11:54.859 ] 00:11:54.859 }' 00:11:54.859 08:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.859 08:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.429 08:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:55.430 08:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.430 08:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.430 08:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.430 08:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.430 08:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:55.430 08:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:55.430 08:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.430 08:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.430 [2024-10-05 08:48:31.743756] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:55.430 08:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.430 08:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:55.430 08:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.430 08:48:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.430 08:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:55.430 08:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:55.430 08:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.430 08:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.430 08:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.430 08:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.430 08:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.430 08:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.430 08:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.430 08:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.430 08:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.430 08:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.430 08:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.430 "name": "Existed_Raid", 00:11:55.430 "uuid": "13b1420b-4c66-4fb5-9583-d30906a56310", 00:11:55.430 "strip_size_kb": 0, 00:11:55.430 "state": "configuring", 00:11:55.430 "raid_level": "raid1", 00:11:55.430 "superblock": true, 00:11:55.430 "num_base_bdevs": 4, 00:11:55.430 "num_base_bdevs_discovered": 2, 00:11:55.430 "num_base_bdevs_operational": 4, 00:11:55.430 "base_bdevs_list": [ 00:11:55.430 { 00:11:55.430 "name": null, 00:11:55.430 
"uuid": "c7677c3e-1bac-4577-8083-eca85da9d9a9", 00:11:55.430 "is_configured": false, 00:11:55.430 "data_offset": 0, 00:11:55.430 "data_size": 63488 00:11:55.430 }, 00:11:55.430 { 00:11:55.430 "name": null, 00:11:55.430 "uuid": "75a2f463-7a17-406f-839b-749683a2fc34", 00:11:55.430 "is_configured": false, 00:11:55.430 "data_offset": 0, 00:11:55.430 "data_size": 63488 00:11:55.430 }, 00:11:55.430 { 00:11:55.430 "name": "BaseBdev3", 00:11:55.430 "uuid": "63201b02-4038-4bf9-9317-3836372981f2", 00:11:55.430 "is_configured": true, 00:11:55.430 "data_offset": 2048, 00:11:55.430 "data_size": 63488 00:11:55.430 }, 00:11:55.430 { 00:11:55.430 "name": "BaseBdev4", 00:11:55.430 "uuid": "0921dee9-ebc8-4f80-8b8b-c27f11694f9b", 00:11:55.430 "is_configured": true, 00:11:55.430 "data_offset": 2048, 00:11:55.430 "data_size": 63488 00:11:55.430 } 00:11:55.430 ] 00:11:55.430 }' 00:11:55.430 08:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.430 08:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.001 08:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.001 08:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:56.001 08:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.001 08:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.001 08:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.001 08:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:56.001 08:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:56.001 08:48:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.001 08:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.001 [2024-10-05 08:48:32.307366] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:56.001 08:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.001 08:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:56.001 08:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.001 08:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.001 08:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:56.001 08:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:56.001 08:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.001 08:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.001 08:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.001 08:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.001 08:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.001 08:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.001 08:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.001 08:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.001 08:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.001 08:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.001 08:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.001 "name": "Existed_Raid", 00:11:56.001 "uuid": "13b1420b-4c66-4fb5-9583-d30906a56310", 00:11:56.001 "strip_size_kb": 0, 00:11:56.001 "state": "configuring", 00:11:56.001 "raid_level": "raid1", 00:11:56.001 "superblock": true, 00:11:56.001 "num_base_bdevs": 4, 00:11:56.001 "num_base_bdevs_discovered": 3, 00:11:56.001 "num_base_bdevs_operational": 4, 00:11:56.001 "base_bdevs_list": [ 00:11:56.001 { 00:11:56.001 "name": null, 00:11:56.001 "uuid": "c7677c3e-1bac-4577-8083-eca85da9d9a9", 00:11:56.001 "is_configured": false, 00:11:56.001 "data_offset": 0, 00:11:56.001 "data_size": 63488 00:11:56.001 }, 00:11:56.001 { 00:11:56.001 "name": "BaseBdev2", 00:11:56.001 "uuid": "75a2f463-7a17-406f-839b-749683a2fc34", 00:11:56.001 "is_configured": true, 00:11:56.001 "data_offset": 2048, 00:11:56.001 "data_size": 63488 00:11:56.001 }, 00:11:56.001 { 00:11:56.001 "name": "BaseBdev3", 00:11:56.001 "uuid": "63201b02-4038-4bf9-9317-3836372981f2", 00:11:56.001 "is_configured": true, 00:11:56.001 "data_offset": 2048, 00:11:56.001 "data_size": 63488 00:11:56.001 }, 00:11:56.001 { 00:11:56.001 "name": "BaseBdev4", 00:11:56.001 "uuid": "0921dee9-ebc8-4f80-8b8b-c27f11694f9b", 00:11:56.001 "is_configured": true, 00:11:56.001 "data_offset": 2048, 00:11:56.001 "data_size": 63488 00:11:56.001 } 00:11:56.001 ] 00:11:56.001 }' 00:11:56.001 08:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.001 08:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.573 08:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:56.573 08:48:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.573 08:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.573 08:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.573 08:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.573 08:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:56.573 08:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.573 08:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:56.573 08:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.573 08:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.573 08:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.573 08:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c7677c3e-1bac-4577-8083-eca85da9d9a9 00:11:56.573 08:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.573 08:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.573 [2024-10-05 08:48:32.850861] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:56.573 [2024-10-05 08:48:32.851234] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:56.573 [2024-10-05 08:48:32.851297] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:56.573 [2024-10-05 08:48:32.851613] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:56.573 [2024-10-05 08:48:32.851817] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:56.573 [2024-10-05 08:48:32.851859] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:56.573 NewBaseBdev 00:11:56.573 [2024-10-05 08:48:32.852054] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:56.573 08:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.573 08:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:56.573 08:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:56.573 08:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:56.573 08:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:56.573 08:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:56.573 08:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:56.573 08:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:56.573 08:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.573 08:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.573 08:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.573 08:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:56.573 08:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.573 08:48:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.573 [ 00:11:56.573 { 00:11:56.573 "name": "NewBaseBdev", 00:11:56.573 "aliases": [ 00:11:56.573 "c7677c3e-1bac-4577-8083-eca85da9d9a9" 00:11:56.573 ], 00:11:56.573 "product_name": "Malloc disk", 00:11:56.573 "block_size": 512, 00:11:56.573 "num_blocks": 65536, 00:11:56.573 "uuid": "c7677c3e-1bac-4577-8083-eca85da9d9a9", 00:11:56.573 "assigned_rate_limits": { 00:11:56.573 "rw_ios_per_sec": 0, 00:11:56.573 "rw_mbytes_per_sec": 0, 00:11:56.573 "r_mbytes_per_sec": 0, 00:11:56.573 "w_mbytes_per_sec": 0 00:11:56.573 }, 00:11:56.573 "claimed": true, 00:11:56.573 "claim_type": "exclusive_write", 00:11:56.573 "zoned": false, 00:11:56.573 "supported_io_types": { 00:11:56.573 "read": true, 00:11:56.573 "write": true, 00:11:56.573 "unmap": true, 00:11:56.573 "flush": true, 00:11:56.573 "reset": true, 00:11:56.573 "nvme_admin": false, 00:11:56.573 "nvme_io": false, 00:11:56.573 "nvme_io_md": false, 00:11:56.573 "write_zeroes": true, 00:11:56.573 "zcopy": true, 00:11:56.573 "get_zone_info": false, 00:11:56.573 "zone_management": false, 00:11:56.573 "zone_append": false, 00:11:56.573 "compare": false, 00:11:56.573 "compare_and_write": false, 00:11:56.573 "abort": true, 00:11:56.573 "seek_hole": false, 00:11:56.573 "seek_data": false, 00:11:56.573 "copy": true, 00:11:56.573 "nvme_iov_md": false 00:11:56.573 }, 00:11:56.573 "memory_domains": [ 00:11:56.573 { 00:11:56.573 "dma_device_id": "system", 00:11:56.573 "dma_device_type": 1 00:11:56.573 }, 00:11:56.573 { 00:11:56.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.573 "dma_device_type": 2 00:11:56.573 } 00:11:56.573 ], 00:11:56.573 "driver_specific": {} 00:11:56.573 } 00:11:56.573 ] 00:11:56.573 08:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.573 08:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:56.573 08:48:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:56.573 08:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.573 08:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:56.573 08:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:56.573 08:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:56.573 08:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.573 08:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.574 08:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.574 08:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.574 08:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.574 08:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.574 08:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.574 08:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.574 08:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.574 08:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.574 08:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.574 "name": "Existed_Raid", 00:11:56.574 "uuid": "13b1420b-4c66-4fb5-9583-d30906a56310", 00:11:56.574 "strip_size_kb": 0, 00:11:56.574 
"state": "online", 00:11:56.574 "raid_level": "raid1", 00:11:56.574 "superblock": true, 00:11:56.574 "num_base_bdevs": 4, 00:11:56.574 "num_base_bdevs_discovered": 4, 00:11:56.574 "num_base_bdevs_operational": 4, 00:11:56.574 "base_bdevs_list": [ 00:11:56.574 { 00:11:56.574 "name": "NewBaseBdev", 00:11:56.574 "uuid": "c7677c3e-1bac-4577-8083-eca85da9d9a9", 00:11:56.574 "is_configured": true, 00:11:56.574 "data_offset": 2048, 00:11:56.574 "data_size": 63488 00:11:56.574 }, 00:11:56.574 { 00:11:56.574 "name": "BaseBdev2", 00:11:56.574 "uuid": "75a2f463-7a17-406f-839b-749683a2fc34", 00:11:56.574 "is_configured": true, 00:11:56.574 "data_offset": 2048, 00:11:56.574 "data_size": 63488 00:11:56.574 }, 00:11:56.574 { 00:11:56.574 "name": "BaseBdev3", 00:11:56.574 "uuid": "63201b02-4038-4bf9-9317-3836372981f2", 00:11:56.574 "is_configured": true, 00:11:56.574 "data_offset": 2048, 00:11:56.574 "data_size": 63488 00:11:56.574 }, 00:11:56.574 { 00:11:56.574 "name": "BaseBdev4", 00:11:56.574 "uuid": "0921dee9-ebc8-4f80-8b8b-c27f11694f9b", 00:11:56.574 "is_configured": true, 00:11:56.574 "data_offset": 2048, 00:11:56.574 "data_size": 63488 00:11:56.574 } 00:11:56.574 ] 00:11:56.574 }' 00:11:56.574 08:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.574 08:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.143 08:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:57.143 08:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:57.143 08:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:57.143 08:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:57.143 08:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:57.143 
08:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:57.143 08:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:57.143 08:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:57.143 08:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.143 08:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.143 [2024-10-05 08:48:33.318398] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:57.143 08:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.143 08:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:57.143 "name": "Existed_Raid", 00:11:57.143 "aliases": [ 00:11:57.143 "13b1420b-4c66-4fb5-9583-d30906a56310" 00:11:57.143 ], 00:11:57.143 "product_name": "Raid Volume", 00:11:57.143 "block_size": 512, 00:11:57.143 "num_blocks": 63488, 00:11:57.143 "uuid": "13b1420b-4c66-4fb5-9583-d30906a56310", 00:11:57.143 "assigned_rate_limits": { 00:11:57.143 "rw_ios_per_sec": 0, 00:11:57.143 "rw_mbytes_per_sec": 0, 00:11:57.143 "r_mbytes_per_sec": 0, 00:11:57.143 "w_mbytes_per_sec": 0 00:11:57.143 }, 00:11:57.143 "claimed": false, 00:11:57.143 "zoned": false, 00:11:57.143 "supported_io_types": { 00:11:57.143 "read": true, 00:11:57.143 "write": true, 00:11:57.143 "unmap": false, 00:11:57.143 "flush": false, 00:11:57.143 "reset": true, 00:11:57.143 "nvme_admin": false, 00:11:57.143 "nvme_io": false, 00:11:57.143 "nvme_io_md": false, 00:11:57.143 "write_zeroes": true, 00:11:57.143 "zcopy": false, 00:11:57.143 "get_zone_info": false, 00:11:57.143 "zone_management": false, 00:11:57.143 "zone_append": false, 00:11:57.143 "compare": false, 00:11:57.143 "compare_and_write": false, 00:11:57.143 
"abort": false, 00:11:57.143 "seek_hole": false, 00:11:57.143 "seek_data": false, 00:11:57.143 "copy": false, 00:11:57.143 "nvme_iov_md": false 00:11:57.143 }, 00:11:57.143 "memory_domains": [ 00:11:57.143 { 00:11:57.143 "dma_device_id": "system", 00:11:57.143 "dma_device_type": 1 00:11:57.144 }, 00:11:57.144 { 00:11:57.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.144 "dma_device_type": 2 00:11:57.144 }, 00:11:57.144 { 00:11:57.144 "dma_device_id": "system", 00:11:57.144 "dma_device_type": 1 00:11:57.144 }, 00:11:57.144 { 00:11:57.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.144 "dma_device_type": 2 00:11:57.144 }, 00:11:57.144 { 00:11:57.144 "dma_device_id": "system", 00:11:57.144 "dma_device_type": 1 00:11:57.144 }, 00:11:57.144 { 00:11:57.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.144 "dma_device_type": 2 00:11:57.144 }, 00:11:57.144 { 00:11:57.144 "dma_device_id": "system", 00:11:57.144 "dma_device_type": 1 00:11:57.144 }, 00:11:57.144 { 00:11:57.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.144 "dma_device_type": 2 00:11:57.144 } 00:11:57.144 ], 00:11:57.144 "driver_specific": { 00:11:57.144 "raid": { 00:11:57.144 "uuid": "13b1420b-4c66-4fb5-9583-d30906a56310", 00:11:57.144 "strip_size_kb": 0, 00:11:57.144 "state": "online", 00:11:57.144 "raid_level": "raid1", 00:11:57.144 "superblock": true, 00:11:57.144 "num_base_bdevs": 4, 00:11:57.144 "num_base_bdevs_discovered": 4, 00:11:57.144 "num_base_bdevs_operational": 4, 00:11:57.144 "base_bdevs_list": [ 00:11:57.144 { 00:11:57.144 "name": "NewBaseBdev", 00:11:57.144 "uuid": "c7677c3e-1bac-4577-8083-eca85da9d9a9", 00:11:57.144 "is_configured": true, 00:11:57.144 "data_offset": 2048, 00:11:57.144 "data_size": 63488 00:11:57.144 }, 00:11:57.144 { 00:11:57.144 "name": "BaseBdev2", 00:11:57.144 "uuid": "75a2f463-7a17-406f-839b-749683a2fc34", 00:11:57.144 "is_configured": true, 00:11:57.144 "data_offset": 2048, 00:11:57.144 "data_size": 63488 00:11:57.144 }, 00:11:57.144 { 
00:11:57.144 "name": "BaseBdev3", 00:11:57.144 "uuid": "63201b02-4038-4bf9-9317-3836372981f2", 00:11:57.144 "is_configured": true, 00:11:57.144 "data_offset": 2048, 00:11:57.144 "data_size": 63488 00:11:57.144 }, 00:11:57.144 { 00:11:57.144 "name": "BaseBdev4", 00:11:57.144 "uuid": "0921dee9-ebc8-4f80-8b8b-c27f11694f9b", 00:11:57.144 "is_configured": true, 00:11:57.144 "data_offset": 2048, 00:11:57.144 "data_size": 63488 00:11:57.144 } 00:11:57.144 ] 00:11:57.144 } 00:11:57.144 } 00:11:57.144 }' 00:11:57.144 08:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:57.144 08:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:57.144 BaseBdev2 00:11:57.144 BaseBdev3 00:11:57.144 BaseBdev4' 00:11:57.144 08:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.144 08:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:57.144 08:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.144 08:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:57.144 08:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.144 08:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.144 08:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.144 08:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.144 08:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:11:57.144 08:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.144 08:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.144 08:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:57.144 08:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.144 08:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.144 08:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.144 08:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.144 08:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.144 08:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.144 08:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.144 08:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:57.144 08:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.144 08:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.144 08:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.144 08:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.144 08:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.144 08:48:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.144 08:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.144 08:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.144 08:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:57.144 08:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.144 08:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.144 08:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.405 08:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.405 08:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.405 08:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:57.405 08:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.405 08:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.405 [2024-10-05 08:48:33.641531] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:57.405 [2024-10-05 08:48:33.641557] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:57.405 [2024-10-05 08:48:33.641634] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:57.405 [2024-10-05 08:48:33.641936] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:57.405 [2024-10-05 08:48:33.641949] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:11:57.405 08:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.405 08:48:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72176 00:11:57.405 08:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 72176 ']' 00:11:57.405 08:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 72176 00:11:57.405 08:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:57.405 08:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:57.405 08:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72176 00:11:57.405 08:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:57.405 08:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:57.405 08:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72176' 00:11:57.405 killing process with pid 72176 00:11:57.405 08:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 72176 00:11:57.405 [2024-10-05 08:48:33.690726] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:57.405 08:48:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 72176 00:11:57.665 [2024-10-05 08:48:34.097793] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:59.067 08:48:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:59.067 00:11:59.067 real 0m11.578s 00:11:59.067 user 0m17.913s 00:11:59.067 sys 0m2.261s 00:11:59.067 08:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:11:59.067 ************************************ 00:11:59.067 END TEST raid_state_function_test_sb 00:11:59.067 ************************************ 00:11:59.067 08:48:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.067 08:48:35 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:11:59.067 08:48:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:59.067 08:48:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:59.067 08:48:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:59.067 ************************************ 00:11:59.067 START TEST raid_superblock_test 00:11:59.067 ************************************ 00:11:59.067 08:48:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:11:59.067 08:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:59.067 08:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:59.067 08:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:59.067 08:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:59.067 08:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:59.067 08:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:59.067 08:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:59.067 08:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:59.067 08:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:59.067 08:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:59.067 08:48:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:59.067 08:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:59.067 08:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:59.067 08:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:59.067 08:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:59.067 08:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72779 00:11:59.067 08:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:59.067 08:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72779 00:11:59.067 08:48:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 72779 ']' 00:11:59.067 08:48:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.067 08:48:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:59.067 08:48:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.067 08:48:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:59.067 08:48:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.326 [2024-10-05 08:48:35.596490] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:11:59.326 [2024-10-05 08:48:35.596666] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72779 ] 00:11:59.326 [2024-10-05 08:48:35.760042] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.586 [2024-10-05 08:48:36.005404] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.846 [2024-10-05 08:48:36.233342] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:59.847 [2024-10-05 08:48:36.233480] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:00.106 
08:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.106 malloc1 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.106 [2024-10-05 08:48:36.478223] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:00.106 [2024-10-05 08:48:36.478368] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.106 [2024-10-05 08:48:36.478410] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:00.106 [2024-10-05 08:48:36.478442] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.106 [2024-10-05 08:48:36.480801] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.106 [2024-10-05 08:48:36.480889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:00.106 pt1 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.106 malloc2 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.106 [2024-10-05 08:48:36.555894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:00.106 [2024-10-05 08:48:36.555978] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.106 [2024-10-05 08:48:36.556001] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:00.106 [2024-10-05 08:48:36.556010] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.106 [2024-10-05 08:48:36.558371] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.106 [2024-10-05 08:48:36.558470] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:00.106 
pt2 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.106 08:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.366 malloc3 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.366 [2024-10-05 08:48:36.614888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:00.366 [2024-10-05 08:48:36.615039] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.366 [2024-10-05 08:48:36.615079] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:00.366 [2024-10-05 08:48:36.615119] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.366 [2024-10-05 08:48:36.617461] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.366 [2024-10-05 08:48:36.617535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:00.366 pt3 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.366 malloc4 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.366 [2024-10-05 08:48:36.679442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:00.366 [2024-10-05 08:48:36.679574] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.366 [2024-10-05 08:48:36.679616] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:00.366 [2024-10-05 08:48:36.679645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.366 [2024-10-05 08:48:36.682087] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.366 [2024-10-05 08:48:36.682160] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:00.366 pt4 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.366 [2024-10-05 08:48:36.691491] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:00.366 [2024-10-05 08:48:36.693661] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:00.366 [2024-10-05 08:48:36.693786] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:00.366 [2024-10-05 08:48:36.693849] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:00.366 [2024-10-05 08:48:36.694091] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:00.366 [2024-10-05 08:48:36.694140] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:00.366 [2024-10-05 08:48:36.694448] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:00.366 [2024-10-05 08:48:36.694659] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:00.366 [2024-10-05 08:48:36.694708] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:00.366 [2024-10-05 08:48:36.694899] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.366 
08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.366 "name": "raid_bdev1", 00:12:00.366 "uuid": "81ad4c7f-9878-431a-8ec9-71f229d326d3", 00:12:00.366 "strip_size_kb": 0, 00:12:00.366 "state": "online", 00:12:00.366 "raid_level": "raid1", 00:12:00.366 "superblock": true, 00:12:00.366 "num_base_bdevs": 4, 00:12:00.366 "num_base_bdevs_discovered": 4, 00:12:00.366 "num_base_bdevs_operational": 4, 00:12:00.366 "base_bdevs_list": [ 00:12:00.366 { 00:12:00.366 "name": "pt1", 00:12:00.366 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:00.366 "is_configured": true, 00:12:00.366 "data_offset": 2048, 00:12:00.366 "data_size": 63488 00:12:00.366 }, 00:12:00.366 { 00:12:00.366 "name": "pt2", 00:12:00.366 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:00.366 "is_configured": true, 00:12:00.366 "data_offset": 2048, 00:12:00.366 "data_size": 63488 00:12:00.366 }, 00:12:00.366 { 00:12:00.366 "name": "pt3", 00:12:00.366 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:00.366 "is_configured": true, 00:12:00.366 "data_offset": 2048, 00:12:00.366 "data_size": 63488 
00:12:00.366 }, 00:12:00.366 { 00:12:00.366 "name": "pt4", 00:12:00.366 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:00.366 "is_configured": true, 00:12:00.366 "data_offset": 2048, 00:12:00.366 "data_size": 63488 00:12:00.366 } 00:12:00.366 ] 00:12:00.366 }' 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.366 08:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.935 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:00.935 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:00.935 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:00.935 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:00.935 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:00.935 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:00.935 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:00.935 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:00.935 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.935 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.935 [2024-10-05 08:48:37.127033] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:00.935 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.935 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:00.935 "name": "raid_bdev1", 00:12:00.935 "aliases": [ 00:12:00.935 "81ad4c7f-9878-431a-8ec9-71f229d326d3" 00:12:00.935 ], 
00:12:00.935 "product_name": "Raid Volume",
00:12:00.935 "block_size": 512,
00:12:00.935 "num_blocks": 63488,
00:12:00.935 "uuid": "81ad4c7f-9878-431a-8ec9-71f229d326d3",
00:12:00.935 "assigned_rate_limits": {
00:12:00.935 "rw_ios_per_sec": 0,
00:12:00.935 "rw_mbytes_per_sec": 0,
00:12:00.935 "r_mbytes_per_sec": 0,
00:12:00.935 "w_mbytes_per_sec": 0
00:12:00.935 },
00:12:00.935 "claimed": false,
00:12:00.935 "zoned": false,
00:12:00.935 "supported_io_types": {
00:12:00.935 "read": true,
00:12:00.935 "write": true,
00:12:00.935 "unmap": false,
00:12:00.935 "flush": false,
00:12:00.935 "reset": true,
00:12:00.935 "nvme_admin": false,
00:12:00.935 "nvme_io": false,
00:12:00.935 "nvme_io_md": false,
00:12:00.935 "write_zeroes": true,
00:12:00.935 "zcopy": false,
00:12:00.935 "get_zone_info": false,
00:12:00.935 "zone_management": false,
00:12:00.935 "zone_append": false,
00:12:00.935 "compare": false,
00:12:00.936 "compare_and_write": false,
00:12:00.936 "abort": false,
00:12:00.936 "seek_hole": false,
00:12:00.936 "seek_data": false,
00:12:00.936 "copy": false,
00:12:00.936 "nvme_iov_md": false
00:12:00.936 },
00:12:00.936 "memory_domains": [
00:12:00.936 {
00:12:00.936 "dma_device_id": "system",
00:12:00.936 "dma_device_type": 1
00:12:00.936 },
00:12:00.936 {
00:12:00.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:00.936 "dma_device_type": 2
00:12:00.936 },
00:12:00.936 {
00:12:00.936 "dma_device_id": "system",
00:12:00.936 "dma_device_type": 1
00:12:00.936 },
00:12:00.936 {
00:12:00.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:00.936 "dma_device_type": 2
00:12:00.936 },
00:12:00.936 {
00:12:00.936 "dma_device_id": "system",
00:12:00.936 "dma_device_type": 1
00:12:00.936 },
00:12:00.936 {
00:12:00.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:00.936 "dma_device_type": 2
00:12:00.936 },
00:12:00.936 {
00:12:00.936 "dma_device_id": "system",
00:12:00.936 "dma_device_type": 1
00:12:00.936 },
00:12:00.936 {
00:12:00.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:00.936 "dma_device_type": 2
00:12:00.936 }
00:12:00.936 ],
00:12:00.936 "driver_specific": {
00:12:00.936 "raid": {
00:12:00.936 "uuid": "81ad4c7f-9878-431a-8ec9-71f229d326d3",
00:12:00.936 "strip_size_kb": 0,
00:12:00.936 "state": "online",
00:12:00.936 "raid_level": "raid1",
00:12:00.936 "superblock": true,
00:12:00.936 "num_base_bdevs": 4,
00:12:00.936 "num_base_bdevs_discovered": 4,
00:12:00.936 "num_base_bdevs_operational": 4,
00:12:00.936 "base_bdevs_list": [
00:12:00.936 {
00:12:00.936 "name": "pt1",
00:12:00.936 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:00.936 "is_configured": true,
00:12:00.936 "data_offset": 2048,
00:12:00.936 "data_size": 63488
00:12:00.936 },
00:12:00.936 {
00:12:00.936 "name": "pt2",
00:12:00.936 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:00.936 "is_configured": true,
00:12:00.936 "data_offset": 2048,
00:12:00.936 "data_size": 63488
00:12:00.936 },
00:12:00.936 {
00:12:00.936 "name": "pt3",
00:12:00.936 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:00.936 "is_configured": true,
00:12:00.936 "data_offset": 2048,
00:12:00.936 "data_size": 63488
00:12:00.936 },
00:12:00.936 {
00:12:00.936 "name": "pt4",
00:12:00.936 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:00.936 "is_configured": true,
00:12:00.936 "data_offset": 2048,
00:12:00.936 "data_size": 63488
00:12:00.936 }
00:12:00.936 ]
00:12:00.936 }
00:12:00.936 }
00:12:00.936 }'
00:12:00.936 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:12:00.936 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:12:00.936 pt2
00:12:00.936 pt3
00:12:00.936 pt4'
00:12:00.936 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:00.936 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:12:00.936 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:00.936 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:00.936 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:12:00.936 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:00.936 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:00.936 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:00.936 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:00.936 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:00.936 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:00.936 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:00.936 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:12:00.936 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:00.936 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:00.936 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:00.936 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:00.936 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:00.936 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:00.936 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:12:00.936 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:00.936 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:00.936 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:00.936 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:00.936 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:00.936 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:00.936 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:00.936 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:00.936 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:12:00.936 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:00.936 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:00.936 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:00.936 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:00.936 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:01.197 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:01.197 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:01.197 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:12:01.197 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.197 [2024-10-05 08:48:37.414421] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:01.197 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:01.197 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=81ad4c7f-9878-431a-8ec9-71f229d326d3
00:12:01.197 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 81ad4c7f-9878-431a-8ec9-71f229d326d3 ']'
00:12:01.197 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:12:01.197 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:01.197 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.197 [2024-10-05 08:48:37.462114] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:01.197 [2024-10-05 08:48:37.462149] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:01.197 [2024-10-05 08:48:37.462247] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:01.197 [2024-10-05 08:48:37.462342] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:01.197 [2024-10-05 08:48:37.462359] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:12:01.197 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:01.197 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:12:01.197 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:01.197 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:01.197 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.197 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:01.197 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.198 [2024-10-05 08:48:37.605881] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:12:01.198 [2024-10-05 08:48:37.608093] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:12:01.198 [2024-10-05 08:48:37.608151] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:12:01.198 [2024-10-05 08:48:37.608184] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:12:01.198 [2024-10-05 08:48:37.608235] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:12:01.198 [2024-10-05 08:48:37.608286] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:12:01.198 [2024-10-05 08:48:37.608306] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:12:01.198 [2024-10-05 08:48:37.608325] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:12:01.198 [2024-10-05 08:48:37.608338] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:01.198 [2024-10-05 08:48:37.608350] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:12:01.198 request:
00:12:01.198 {
00:12:01.198 "name": "raid_bdev1",
00:12:01.198 "raid_level": "raid1",
00:12:01.198 "base_bdevs": [
00:12:01.198 "malloc1",
00:12:01.198 "malloc2",
00:12:01.198 "malloc3",
00:12:01.198 "malloc4"
00:12:01.198 ],
00:12:01.198 "superblock": false,
00:12:01.198 "method": "bdev_raid_create",
00:12:01.198 "req_id": 1
00:12:01.198 }
00:12:01.198 Got JSON-RPC error response
00:12:01.198 response:
00:12:01.198 {
00:12:01.198 "code": -17,
00:12:01.198 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:12:01.198 }
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:01.198 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.458 [2024-10-05 08:48:37.669722] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:12:01.459 [2024-10-05 08:48:37.669796] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:01.459 [2024-10-05 08:48:37.669815] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:12:01.459 [2024-10-05 08:48:37.669827] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:01.459 [2024-10-05 08:48:37.672288] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:01.459 [2024-10-05 08:48:37.672328] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:12:01.459 [2024-10-05 08:48:37.672408] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:12:01.459 [2024-10-05 08:48:37.672463] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:12:01.459 pt1
00:12:01.459 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:01.459 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
00:12:01.459 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:01.459 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:01.459 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:01.459 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:01.459 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:01.459 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:01.459 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:01.459 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:01.459 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:01.459 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:01.459 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:01.459 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.459 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:01.459 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:01.459 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:01.459 "name": "raid_bdev1",
00:12:01.459 "uuid": "81ad4c7f-9878-431a-8ec9-71f229d326d3",
00:12:01.459 "strip_size_kb": 0,
00:12:01.459 "state": "configuring",
00:12:01.459 "raid_level": "raid1",
00:12:01.459 "superblock": true,
00:12:01.459 "num_base_bdevs": 4,
00:12:01.459 "num_base_bdevs_discovered": 1,
00:12:01.459 "num_base_bdevs_operational": 4,
00:12:01.459 "base_bdevs_list": [
00:12:01.459 {
00:12:01.459 "name": "pt1",
00:12:01.459 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:01.459 "is_configured": true,
00:12:01.459 "data_offset": 2048,
00:12:01.459 "data_size": 63488
00:12:01.459 },
00:12:01.459 {
00:12:01.459 "name": null,
00:12:01.459 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:01.459 "is_configured": false,
00:12:01.459 "data_offset": 2048,
00:12:01.459 "data_size": 63488
00:12:01.459 },
00:12:01.459 {
00:12:01.459 "name": null,
00:12:01.459 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:01.459 "is_configured": false,
00:12:01.459 "data_offset": 2048,
00:12:01.459 "data_size": 63488
00:12:01.459 },
00:12:01.459 {
00:12:01.459 "name": null,
00:12:01.459 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:01.459 "is_configured": false,
00:12:01.459 "data_offset": 2048,
00:12:01.459 "data_size": 63488
00:12:01.459 }
00:12:01.459 ]
00:12:01.459 }'
00:12:01.459 08:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:01.459 08:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.719 08:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:12:01.719 08:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:12:01.719 08:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:01.719 08:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.719 [2024-10-05 08:48:38.128956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:12:01.719 [2024-10-05 08:48:38.129026] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:01.719 [2024-10-05 08:48:38.129046] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:12:01.719 [2024-10-05 08:48:38.129058] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:01.719 [2024-10-05 08:48:38.129549] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:01.719 [2024-10-05 08:48:38.129582] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:12:01.719 [2024-10-05 08:48:38.129667] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:12:01.719 [2024-10-05 08:48:38.129710] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:01.719 pt2
00:12:01.719 08:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:01.719 08:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:12:01.719 08:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:01.719 08:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.719 [2024-10-05 08:48:38.136973] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:12:01.719 08:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:01.719 08:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
00:12:01.719 08:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:01.719 08:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:01.719 08:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:01.719 08:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:01.719 08:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:01.719 08:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:01.719 08:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:01.719 08:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:01.719 08:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:01.719 08:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:01.719 08:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:01.719 08:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:01.719 08:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:01.719 08:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:01.979 08:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:01.979 "name": "raid_bdev1",
00:12:01.979 "uuid": "81ad4c7f-9878-431a-8ec9-71f229d326d3",
00:12:01.979 "strip_size_kb": 0,
00:12:01.979 "state": "configuring",
00:12:01.979 "raid_level": "raid1",
00:12:01.979 "superblock": true,
00:12:01.979 "num_base_bdevs": 4,
00:12:01.979 "num_base_bdevs_discovered": 1,
00:12:01.979 "num_base_bdevs_operational": 4,
00:12:01.979 "base_bdevs_list": [
00:12:01.979 {
00:12:01.979 "name": "pt1",
00:12:01.979 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:01.979 "is_configured": true,
00:12:01.979 "data_offset": 2048,
00:12:01.979 "data_size": 63488
00:12:01.979 },
00:12:01.979 {
00:12:01.979 "name": null,
00:12:01.979 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:01.979 "is_configured": false,
00:12:01.979 "data_offset": 0,
00:12:01.979 "data_size": 63488
00:12:01.979 },
00:12:01.979 {
00:12:01.979 "name": null,
00:12:01.979 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:01.979 "is_configured": false,
00:12:01.979 "data_offset": 2048,
00:12:01.979 "data_size": 63488
00:12:01.979 },
00:12:01.979 {
00:12:01.979 "name": null,
00:12:01.979 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:01.979 "is_configured": false,
00:12:01.979 "data_offset": 2048,
00:12:01.979 "data_size": 63488
00:12:01.979 }
00:12:01.979 ]
00:12:01.979 }'
00:12:01.979 08:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:01.979 08:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:02.238 08:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:12:02.238 08:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:02.238 08:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:12:02.238 08:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:02.238 08:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:02.238 [2024-10-05 08:48:38.568205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:12:02.238 [2024-10-05 08:48:38.568269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:02.238 [2024-10-05 08:48:38.568299] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:12:02.238 [2024-10-05 08:48:38.568312] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:02.238 [2024-10-05 08:48:38.568837] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:02.238 [2024-10-05 08:48:38.568866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:12:02.238 [2024-10-05 08:48:38.568972] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:12:02.238 [2024-10-05 08:48:38.569005] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:02.238 pt2
00:12:02.238 08:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:02.238 08:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:12:02.238 08:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:02.238 08:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:12:02.238 08:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:02.238 08:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:02.238 [2024-10-05 08:48:38.576174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:12:02.238 [2024-10-05 08:48:38.576221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:02.238 [2024-10-05 08:48:38.576239] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:12:02.238 [2024-10-05 08:48:38.576248] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:02.238 [2024-10-05 08:48:38.576654] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:02.238 [2024-10-05 08:48:38.576681] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:12:02.238 [2024-10-05 08:48:38.576745] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:12:02.238 [2024-10-05 08:48:38.576768] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:12:02.238 pt3
00:12:02.238 08:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:02.238 08:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:12:02.238 08:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:02.238 08:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:12:02.238 08:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:02.238 08:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:02.238 [2024-10-05 08:48:38.584124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:12:02.238 [2024-10-05 08:48:38.584164] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:02.238 [2024-10-05 08:48:38.584180] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:12:02.238 [2024-10-05 08:48:38.584188] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:02.238 [2024-10-05 08:48:38.584569] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:02.238 [2024-10-05 08:48:38.584593] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:12:02.238 [2024-10-05 08:48:38.584650] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:12:02.238 [2024-10-05 08:48:38.584675] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:12:02.238 [2024-10-05 08:48:38.584823] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:12:02.239 [2024-10-05 08:48:38.584831] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:12:02.239 [2024-10-05 08:48:38.585095] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:12:02.239 [2024-10-05 08:48:38.585279] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:12:02.239 [2024-10-05 08:48:38.585297] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:12:02.239 [2024-10-05 08:48:38.585423] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:02.239 pt4
00:12:02.239 08:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:02.239 08:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:12:02.239 08:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:02.239 08:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:12:02.239 08:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:02.239 08:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:02.239 08:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:02.239 08:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:02.239 08:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:02.239 08:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:02.239 08:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:02.239 08:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:02.239 08:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:02.239 08:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:02.239 08:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:02.239 08:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:02.239 08:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:02.239 08:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:02.239 08:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:02.239 "name": "raid_bdev1",
00:12:02.239 "uuid": "81ad4c7f-9878-431a-8ec9-71f229d326d3",
00:12:02.239 "strip_size_kb": 0,
00:12:02.239 "state": "online",
00:12:02.239 "raid_level": "raid1",
00:12:02.239 "superblock": true,
00:12:02.239 "num_base_bdevs": 4,
00:12:02.239 "num_base_bdevs_discovered": 4,
00:12:02.239 "num_base_bdevs_operational": 4,
00:12:02.239 "base_bdevs_list": [
00:12:02.239 {
00:12:02.239 "name": "pt1",
00:12:02.239 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:02.239 "is_configured": true,
00:12:02.239 "data_offset": 2048,
00:12:02.239 "data_size": 63488
00:12:02.239 },
00:12:02.239 {
00:12:02.239 "name": "pt2",
00:12:02.239 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:02.239 "is_configured": true,
00:12:02.239 "data_offset": 2048,
00:12:02.239 "data_size": 63488
00:12:02.239 },
00:12:02.239 {
00:12:02.239 "name": "pt3",
00:12:02.239 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:02.239 "is_configured": true,
00:12:02.239 "data_offset": 2048,
00:12:02.239 "data_size": 63488
00:12:02.239 },
00:12:02.239 {
00:12:02.239 "name": "pt4",
00:12:02.239 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:02.239 "is_configured": true,
00:12:02.239 "data_offset": 2048,
00:12:02.239 "data_size": 63488
00:12:02.239 }
00:12:02.239 ]
00:12:02.239 }'
00:12:02.239 08:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:02.239 08:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:02.809 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:12:02.809 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:12:02.809 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:12:02.809 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:12:02.809 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:12:02.809 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:12:02.809 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:12:02.809 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:02.809 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:02.809 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:02.809 [2024-10-05 08:48:39.063608] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:02.809 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:02.809 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:12:02.809 "name": "raid_bdev1",
00:12:02.809 "aliases": [
00:12:02.809 "81ad4c7f-9878-431a-8ec9-71f229d326d3"
00:12:02.809 ],
00:12:02.809 "product_name": "Raid Volume",
00:12:02.809 "block_size": 512,
00:12:02.809 "num_blocks": 63488,
00:12:02.809 "uuid": "81ad4c7f-9878-431a-8ec9-71f229d326d3",
00:12:02.809 "assigned_rate_limits": {
00:12:02.809 "rw_ios_per_sec": 0,
00:12:02.809 "rw_mbytes_per_sec": 0,
00:12:02.809 "r_mbytes_per_sec": 0,
00:12:02.809 "w_mbytes_per_sec": 0
00:12:02.809 },
00:12:02.809 "claimed": false,
00:12:02.809 "zoned": false,
00:12:02.809 "supported_io_types": {
00:12:02.809 "read": true,
00:12:02.809 "write": true,
00:12:02.809 "unmap": false,
00:12:02.809 "flush": false,
00:12:02.809 "reset": true,
00:12:02.809 "nvme_admin": false,
00:12:02.809 "nvme_io": false,
00:12:02.809 "nvme_io_md": false,
00:12:02.809 "write_zeroes": true,
00:12:02.809 "zcopy": false,
00:12:02.809 "get_zone_info": false,
00:12:02.809 "zone_management": false,
00:12:02.809 "zone_append": false,
00:12:02.809 "compare": false,
00:12:02.809 "compare_and_write": false,
00:12:02.809 "abort": false,
00:12:02.809 "seek_hole": false,
00:12:02.809 "seek_data": false,
00:12:02.809 "copy": false,
00:12:02.809 "nvme_iov_md": false
00:12:02.809 },
00:12:02.809 "memory_domains": [
00:12:02.810 {
00:12:02.810 "dma_device_id": "system",
00:12:02.810 "dma_device_type": 1
00:12:02.810 },
00:12:02.810 {
00:12:02.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:02.810 "dma_device_type": 2
00:12:02.810 },
00:12:02.810 {
00:12:02.810 "dma_device_id": "system",
00:12:02.810 "dma_device_type": 1
00:12:02.810 },
00:12:02.810 {
00:12:02.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:02.810 "dma_device_type": 2
00:12:02.810 },
00:12:02.810 {
00:12:02.810 "dma_device_id": "system",
00:12:02.810 "dma_device_type": 1
00:12:02.810 },
00:12:02.810 {
00:12:02.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:02.810 "dma_device_type": 2
00:12:02.810 },
00:12:02.810 {
00:12:02.810 "dma_device_id": "system",
00:12:02.810 "dma_device_type": 1
00:12:02.810 },
00:12:02.810 {
00:12:02.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:02.810 "dma_device_type": 2
00:12:02.810 }
00:12:02.810 ],
00:12:02.810 "driver_specific": {
00:12:02.810 "raid": {
00:12:02.810 "uuid": "81ad4c7f-9878-431a-8ec9-71f229d326d3",
00:12:02.810 "strip_size_kb": 0,
00:12:02.810 "state": "online",
00:12:02.810 "raid_level": "raid1",
00:12:02.810 "superblock": true,
00:12:02.810 "num_base_bdevs": 4,
00:12:02.810 "num_base_bdevs_discovered": 4,
00:12:02.810 "num_base_bdevs_operational": 4,
00:12:02.810 "base_bdevs_list": [
00:12:02.810 {
00:12:02.810 "name": "pt1",
00:12:02.810 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:02.810 "is_configured": true,
00:12:02.810 "data_offset": 2048,
00:12:02.810 "data_size": 63488
00:12:02.810 },
00:12:02.810 {
00:12:02.810 "name": "pt2",
00:12:02.810 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:02.810 "is_configured": true,
00:12:02.810 "data_offset": 2048,
00:12:02.810 "data_size": 63488
00:12:02.810 },
00:12:02.810 {
00:12:02.810 "name": "pt3",
00:12:02.810 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:02.810 "is_configured": true,
00:12:02.810 "data_offset": 2048,
00:12:02.810 "data_size": 63488
00:12:02.810 },
00:12:02.810 {
00:12:02.810 "name": "pt4",
00:12:02.810 "uuid":
"00000000-0000-0000-0000-000000000004", 00:12:02.810 "is_configured": true, 00:12:02.810 "data_offset": 2048, 00:12:02.810 "data_size": 63488 00:12:02.810 } 00:12:02.810 ] 00:12:02.810 } 00:12:02.810 } 00:12:02.810 }' 00:12:02.810 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:02.810 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:02.810 pt2 00:12:02.810 pt3 00:12:02.810 pt4' 00:12:02.810 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.810 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:02.810 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:02.810 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.810 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:02.810 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.810 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.810 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.810 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:02.810 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:02.810 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:02.810 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.810 08:48:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:02.810 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.810 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.810 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.810 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:02.810 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:02.810 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:02.810 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.810 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:02.810 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.810 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.810 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.810 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:02.810 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:02.810 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:02.810 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.810 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:02.810 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:02.810 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.070 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.070 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:03.070 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:03.070 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:03.070 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.070 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.070 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:03.070 [2024-10-05 08:48:39.311148] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:03.070 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.070 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 81ad4c7f-9878-431a-8ec9-71f229d326d3 '!=' 81ad4c7f-9878-431a-8ec9-71f229d326d3 ']' 00:12:03.070 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:03.070 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:03.070 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:03.070 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:03.070 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.070 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.071 [2024-10-05 08:48:39.358825] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:03.071 08:48:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.071 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:03.071 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:03.071 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:03.071 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.071 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.071 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:03.071 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.071 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.071 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.071 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.071 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.071 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.071 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.071 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.071 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.071 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.071 "name": "raid_bdev1", 00:12:03.071 "uuid": "81ad4c7f-9878-431a-8ec9-71f229d326d3", 00:12:03.071 "strip_size_kb": 0, 00:12:03.071 "state": "online", 
00:12:03.071 "raid_level": "raid1", 00:12:03.071 "superblock": true, 00:12:03.071 "num_base_bdevs": 4, 00:12:03.071 "num_base_bdevs_discovered": 3, 00:12:03.071 "num_base_bdevs_operational": 3, 00:12:03.071 "base_bdevs_list": [ 00:12:03.071 { 00:12:03.071 "name": null, 00:12:03.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.071 "is_configured": false, 00:12:03.071 "data_offset": 0, 00:12:03.071 "data_size": 63488 00:12:03.071 }, 00:12:03.071 { 00:12:03.071 "name": "pt2", 00:12:03.071 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:03.071 "is_configured": true, 00:12:03.071 "data_offset": 2048, 00:12:03.071 "data_size": 63488 00:12:03.071 }, 00:12:03.071 { 00:12:03.071 "name": "pt3", 00:12:03.071 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:03.071 "is_configured": true, 00:12:03.071 "data_offset": 2048, 00:12:03.071 "data_size": 63488 00:12:03.071 }, 00:12:03.071 { 00:12:03.071 "name": "pt4", 00:12:03.071 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:03.071 "is_configured": true, 00:12:03.071 "data_offset": 2048, 00:12:03.071 "data_size": 63488 00:12:03.071 } 00:12:03.071 ] 00:12:03.071 }' 00:12:03.071 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.071 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.332 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:03.332 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.332 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.332 [2024-10-05 08:48:39.782061] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:03.332 [2024-10-05 08:48:39.782094] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:03.332 [2024-10-05 08:48:39.782163] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:12:03.332 [2024-10-05 08:48:39.782244] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:03.332 [2024-10-05 08:48:39.782253] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:03.332 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.332 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.332 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.332 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.332 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:03.332 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.592 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:03.592 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:03.592 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:03.592 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:03.592 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:03.592 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.592 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.592 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.592 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:03.592 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:03.592 
08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:12:03.592 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.592 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.592 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.592 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:03.592 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:03.592 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:12:03.592 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.592 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.592 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.592 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:03.592 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:03.592 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:03.592 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:03.592 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:03.592 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.592 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.592 [2024-10-05 08:48:39.877922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:03.592 [2024-10-05 08:48:39.877978] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:03.592 [2024-10-05 08:48:39.877999] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:03.592 [2024-10-05 08:48:39.878008] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:03.592 [2024-10-05 08:48:39.880420] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:03.592 [2024-10-05 08:48:39.880453] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:03.592 [2024-10-05 08:48:39.880545] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:03.592 [2024-10-05 08:48:39.880588] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:03.592 pt2 00:12:03.592 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.592 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:03.592 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:03.592 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.592 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.592 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.592 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:03.592 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.592 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.593 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.593 08:48:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.593 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.593 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.593 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.593 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.593 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.593 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.593 "name": "raid_bdev1", 00:12:03.593 "uuid": "81ad4c7f-9878-431a-8ec9-71f229d326d3", 00:12:03.593 "strip_size_kb": 0, 00:12:03.593 "state": "configuring", 00:12:03.593 "raid_level": "raid1", 00:12:03.593 "superblock": true, 00:12:03.593 "num_base_bdevs": 4, 00:12:03.593 "num_base_bdevs_discovered": 1, 00:12:03.593 "num_base_bdevs_operational": 3, 00:12:03.593 "base_bdevs_list": [ 00:12:03.593 { 00:12:03.593 "name": null, 00:12:03.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.593 "is_configured": false, 00:12:03.593 "data_offset": 2048, 00:12:03.593 "data_size": 63488 00:12:03.593 }, 00:12:03.593 { 00:12:03.593 "name": "pt2", 00:12:03.593 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:03.593 "is_configured": true, 00:12:03.593 "data_offset": 2048, 00:12:03.593 "data_size": 63488 00:12:03.593 }, 00:12:03.593 { 00:12:03.593 "name": null, 00:12:03.593 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:03.593 "is_configured": false, 00:12:03.593 "data_offset": 2048, 00:12:03.593 "data_size": 63488 00:12:03.593 }, 00:12:03.593 { 00:12:03.593 "name": null, 00:12:03.593 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:03.593 "is_configured": false, 00:12:03.593 "data_offset": 2048, 00:12:03.593 "data_size": 63488 00:12:03.593 } 00:12:03.593 ] 00:12:03.593 }' 
00:12:03.593 08:48:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.593 08:48:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.164 08:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:04.164 08:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:04.164 08:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:04.164 08:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.164 08:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.164 [2024-10-05 08:48:40.357092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:04.164 [2024-10-05 08:48:40.357155] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.164 [2024-10-05 08:48:40.357175] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:04.165 [2024-10-05 08:48:40.357184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.165 [2024-10-05 08:48:40.357601] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.165 [2024-10-05 08:48:40.357625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:04.165 [2024-10-05 08:48:40.357700] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:04.165 [2024-10-05 08:48:40.357727] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:04.165 pt3 00:12:04.165 08:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.165 08:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:12:04.165 08:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:04.165 08:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:04.165 08:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.165 08:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.165 08:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:04.165 08:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.165 08:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.165 08:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.165 08:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.165 08:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.165 08:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.165 08:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.165 08:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.165 08:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.165 08:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.165 "name": "raid_bdev1", 00:12:04.165 "uuid": "81ad4c7f-9878-431a-8ec9-71f229d326d3", 00:12:04.165 "strip_size_kb": 0, 00:12:04.165 "state": "configuring", 00:12:04.165 "raid_level": "raid1", 00:12:04.165 "superblock": true, 00:12:04.165 "num_base_bdevs": 4, 00:12:04.165 "num_base_bdevs_discovered": 2, 00:12:04.165 "num_base_bdevs_operational": 3, 00:12:04.165 
"base_bdevs_list": [ 00:12:04.165 { 00:12:04.165 "name": null, 00:12:04.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.165 "is_configured": false, 00:12:04.165 "data_offset": 2048, 00:12:04.165 "data_size": 63488 00:12:04.165 }, 00:12:04.165 { 00:12:04.165 "name": "pt2", 00:12:04.165 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:04.165 "is_configured": true, 00:12:04.165 "data_offset": 2048, 00:12:04.165 "data_size": 63488 00:12:04.165 }, 00:12:04.165 { 00:12:04.165 "name": "pt3", 00:12:04.165 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:04.165 "is_configured": true, 00:12:04.165 "data_offset": 2048, 00:12:04.165 "data_size": 63488 00:12:04.165 }, 00:12:04.165 { 00:12:04.165 "name": null, 00:12:04.165 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:04.165 "is_configured": false, 00:12:04.165 "data_offset": 2048, 00:12:04.165 "data_size": 63488 00:12:04.165 } 00:12:04.165 ] 00:12:04.165 }' 00:12:04.165 08:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.165 08:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.426 08:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:04.426 08:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:04.426 08:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:12:04.426 08:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:04.426 08:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.426 08:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.426 [2024-10-05 08:48:40.804372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:04.426 [2024-10-05 08:48:40.804428] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.426 [2024-10-05 08:48:40.804449] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:04.426 [2024-10-05 08:48:40.804458] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.426 [2024-10-05 08:48:40.804922] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.426 [2024-10-05 08:48:40.804947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:04.426 [2024-10-05 08:48:40.805033] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:04.426 [2024-10-05 08:48:40.805061] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:04.426 [2024-10-05 08:48:40.805192] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:04.426 [2024-10-05 08:48:40.805199] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:04.426 [2024-10-05 08:48:40.805449] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:04.426 [2024-10-05 08:48:40.805610] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:04.426 [2024-10-05 08:48:40.805627] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:04.426 [2024-10-05 08:48:40.805764] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:04.426 pt4 00:12:04.426 08:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.426 08:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:04.426 08:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:04.426 08:48:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.426 08:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.426 08:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.426 08:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:04.426 08:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.426 08:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.426 08:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.426 08:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.426 08:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.426 08:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.426 08:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.426 08:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.426 08:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.426 08:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.426 "name": "raid_bdev1", 00:12:04.426 "uuid": "81ad4c7f-9878-431a-8ec9-71f229d326d3", 00:12:04.426 "strip_size_kb": 0, 00:12:04.426 "state": "online", 00:12:04.426 "raid_level": "raid1", 00:12:04.426 "superblock": true, 00:12:04.426 "num_base_bdevs": 4, 00:12:04.426 "num_base_bdevs_discovered": 3, 00:12:04.426 "num_base_bdevs_operational": 3, 00:12:04.426 "base_bdevs_list": [ 00:12:04.426 { 00:12:04.426 "name": null, 00:12:04.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.426 "is_configured": false, 00:12:04.426 
"data_offset": 2048, 00:12:04.426 "data_size": 63488 00:12:04.426 }, 00:12:04.426 { 00:12:04.426 "name": "pt2", 00:12:04.426 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:04.426 "is_configured": true, 00:12:04.426 "data_offset": 2048, 00:12:04.426 "data_size": 63488 00:12:04.426 }, 00:12:04.426 { 00:12:04.426 "name": "pt3", 00:12:04.426 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:04.426 "is_configured": true, 00:12:04.426 "data_offset": 2048, 00:12:04.426 "data_size": 63488 00:12:04.426 }, 00:12:04.426 { 00:12:04.426 "name": "pt4", 00:12:04.426 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:04.426 "is_configured": true, 00:12:04.427 "data_offset": 2048, 00:12:04.427 "data_size": 63488 00:12:04.427 } 00:12:04.427 ] 00:12:04.427 }' 00:12:04.427 08:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.427 08:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.998 08:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:04.998 08:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.998 08:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.998 [2024-10-05 08:48:41.263598] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:04.998 [2024-10-05 08:48:41.263634] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:04.998 [2024-10-05 08:48:41.263729] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:04.998 [2024-10-05 08:48:41.263812] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:04.999 [2024-10-05 08:48:41.263827] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:04.999 08:48:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.999 08:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.999 08:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.999 08:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.999 08:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:04.999 08:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.999 08:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:04.999 08:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:04.999 08:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:12:04.999 08:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:12:04.999 08:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:12:04.999 08:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.999 08:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.999 08:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.999 08:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:04.999 08:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.999 08:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.999 [2024-10-05 08:48:41.323483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:04.999 [2024-10-05 08:48:41.323547] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:04.999 [2024-10-05 08:48:41.323567] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:12:04.999 [2024-10-05 08:48:41.323579] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.999 [2024-10-05 08:48:41.326139] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.999 [2024-10-05 08:48:41.326179] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:04.999 [2024-10-05 08:48:41.326275] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:04.999 [2024-10-05 08:48:41.326329] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:04.999 [2024-10-05 08:48:41.326458] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:04.999 [2024-10-05 08:48:41.326474] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:04.999 [2024-10-05 08:48:41.326490] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:12:04.999 [2024-10-05 08:48:41.326563] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:04.999 [2024-10-05 08:48:41.326669] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:04.999 pt1 00:12:04.999 08:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.999 08:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:12:04.999 08:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:04.999 08:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:04.999 08:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:12:04.999 08:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.999 08:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.999 08:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:04.999 08:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.999 08:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.999 08:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.999 08:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.999 08:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.999 08:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.999 08:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.999 08:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.999 08:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.999 08:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.999 "name": "raid_bdev1", 00:12:04.999 "uuid": "81ad4c7f-9878-431a-8ec9-71f229d326d3", 00:12:04.999 "strip_size_kb": 0, 00:12:04.999 "state": "configuring", 00:12:04.999 "raid_level": "raid1", 00:12:04.999 "superblock": true, 00:12:04.999 "num_base_bdevs": 4, 00:12:04.999 "num_base_bdevs_discovered": 2, 00:12:04.999 "num_base_bdevs_operational": 3, 00:12:04.999 "base_bdevs_list": [ 00:12:04.999 { 00:12:04.999 "name": null, 00:12:04.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.999 "is_configured": false, 00:12:04.999 "data_offset": 2048, 00:12:04.999 
"data_size": 63488 00:12:04.999 }, 00:12:04.999 { 00:12:04.999 "name": "pt2", 00:12:04.999 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:04.999 "is_configured": true, 00:12:04.999 "data_offset": 2048, 00:12:04.999 "data_size": 63488 00:12:04.999 }, 00:12:04.999 { 00:12:04.999 "name": "pt3", 00:12:04.999 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:04.999 "is_configured": true, 00:12:04.999 "data_offset": 2048, 00:12:04.999 "data_size": 63488 00:12:04.999 }, 00:12:04.999 { 00:12:04.999 "name": null, 00:12:04.999 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:04.999 "is_configured": false, 00:12:04.999 "data_offset": 2048, 00:12:04.999 "data_size": 63488 00:12:04.999 } 00:12:04.999 ] 00:12:04.999 }' 00:12:04.999 08:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.999 08:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.259 08:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:05.259 08:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.259 08:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:05.259 08:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.520 08:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.520 08:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:12:05.520 08:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:05.520 08:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.520 08:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.520 [2024-10-05 
08:48:41.782744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:05.520 [2024-10-05 08:48:41.782815] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.520 [2024-10-05 08:48:41.782841] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:12:05.520 [2024-10-05 08:48:41.782851] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.520 [2024-10-05 08:48:41.783405] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.520 [2024-10-05 08:48:41.783432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:05.520 [2024-10-05 08:48:41.783522] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:05.520 [2024-10-05 08:48:41.783548] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:05.520 [2024-10-05 08:48:41.783693] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:05.520 [2024-10-05 08:48:41.783705] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:05.520 [2024-10-05 08:48:41.783993] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:05.520 [2024-10-05 08:48:41.784147] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:05.520 [2024-10-05 08:48:41.784163] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:05.520 [2024-10-05 08:48:41.784316] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.520 pt4 00:12:05.520 08:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.520 08:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:05.520 08:48:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:05.520 08:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.520 08:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.520 08:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.520 08:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:05.520 08:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.520 08:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.520 08:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.520 08:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.520 08:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.520 08:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.520 08:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.520 08:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.520 08:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.520 08:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.520 "name": "raid_bdev1", 00:12:05.520 "uuid": "81ad4c7f-9878-431a-8ec9-71f229d326d3", 00:12:05.520 "strip_size_kb": 0, 00:12:05.520 "state": "online", 00:12:05.520 "raid_level": "raid1", 00:12:05.520 "superblock": true, 00:12:05.520 "num_base_bdevs": 4, 00:12:05.520 "num_base_bdevs_discovered": 3, 00:12:05.520 "num_base_bdevs_operational": 3, 00:12:05.520 "base_bdevs_list": [ 00:12:05.520 { 
00:12:05.520 "name": null, 00:12:05.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.520 "is_configured": false, 00:12:05.520 "data_offset": 2048, 00:12:05.520 "data_size": 63488 00:12:05.520 }, 00:12:05.520 { 00:12:05.520 "name": "pt2", 00:12:05.520 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:05.520 "is_configured": true, 00:12:05.520 "data_offset": 2048, 00:12:05.520 "data_size": 63488 00:12:05.520 }, 00:12:05.520 { 00:12:05.520 "name": "pt3", 00:12:05.520 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:05.520 "is_configured": true, 00:12:05.520 "data_offset": 2048, 00:12:05.520 "data_size": 63488 00:12:05.520 }, 00:12:05.520 { 00:12:05.520 "name": "pt4", 00:12:05.520 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:05.520 "is_configured": true, 00:12:05.520 "data_offset": 2048, 00:12:05.520 "data_size": 63488 00:12:05.520 } 00:12:05.520 ] 00:12:05.520 }' 00:12:05.520 08:48:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.520 08:48:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.780 08:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:05.780 08:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:05.780 08:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.780 08:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.780 08:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.780 08:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:05.780 08:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:05.780 08:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.780 
08:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.780 08:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:05.780 [2024-10-05 08:48:42.218230] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:05.780 08:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.040 08:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 81ad4c7f-9878-431a-8ec9-71f229d326d3 '!=' 81ad4c7f-9878-431a-8ec9-71f229d326d3 ']' 00:12:06.040 08:48:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72779 00:12:06.040 08:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 72779 ']' 00:12:06.040 08:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 72779 00:12:06.040 08:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:12:06.040 08:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:06.041 08:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72779 00:12:06.041 08:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:06.041 08:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:06.041 killing process with pid 72779 00:12:06.041 08:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72779' 00:12:06.041 08:48:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 72779 00:12:06.041 [2024-10-05 08:48:42.305679] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:06.041 [2024-10-05 08:48:42.305781] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:06.041 08:48:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 72779 00:12:06.041 [2024-10-05 08:48:42.305859] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:06.041 [2024-10-05 08:48:42.305872] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:12:06.300 [2024-10-05 08:48:42.718946] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:07.711 08:48:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:07.711 00:12:07.711 real 0m8.519s 00:12:07.711 user 0m13.103s 00:12:07.711 sys 0m1.629s 00:12:07.711 08:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:07.711 08:48:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.711 ************************************ 00:12:07.711 END TEST raid_superblock_test 00:12:07.711 ************************************ 00:12:07.711 08:48:44 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:12:07.711 08:48:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:07.711 08:48:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:07.711 08:48:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:07.711 ************************************ 00:12:07.711 START TEST raid_read_error_test 00:12:07.711 ************************************ 00:12:07.711 08:48:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:12:07.711 08:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:07.711 08:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:07.711 08:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:07.711 
08:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:07.711 08:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:07.711 08:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:07.711 08:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:07.711 08:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:07.711 08:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:07.711 08:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:07.711 08:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:07.711 08:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:07.711 08:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:07.711 08:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:07.711 08:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:07.711 08:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:07.711 08:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:07.711 08:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:07.711 08:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:07.711 08:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:07.711 08:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:07.711 08:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:07.712 08:48:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:07.712 08:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:07.712 08:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:07.712 08:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:07.712 08:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:07.712 08:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.NjkIZqrxVp 00:12:07.712 08:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73218 00:12:07.712 08:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:07.712 08:48:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73218 00:12:07.712 08:48:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 73218 ']' 00:12:07.712 08:48:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.712 08:48:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:07.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.712 08:48:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.712 08:48:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:07.712 08:48:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.984 [2024-10-05 08:48:44.211679] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
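[editor's note] The `waitforlisten 73218` call traced above blocks until the freshly launched bdevperf process opens its RPC socket at /var/tmp/spdk.sock. A minimal sketch of that polling pattern, under the assumption that it reduces to "poll until the path exists, with a retry cap" (the hypothetical `wait_for_path` name is ours; the real helper in autotest_common.sh does more, e.g. checking the pid is still alive):

```shell
# Poll until a path appears, giving up after max_retries attempts.
# Mirrors the spirit of autotest_common.sh's waitforlisten, which waits
# for the SPDK app to create its UNIX-domain RPC socket before sending RPCs.
wait_for_path() {
    local path=$1 max_retries=${2:-100} i=0
    while [ ! -e "$path" ]; do
        i=$((i + 1))
        if [ "$i" -ge "$max_retries" ]; then
            echo "timed out waiting for $path" >&2
            return 1
        fi
        sleep 0.1
    done
    return 0
}

# Demo: the path appears shortly after we start waiting, as the socket
# would once the daemon finishes startup.
tmpfile=$(mktemp -u)
( sleep 0.3; touch "$tmpfile" ) &
wait_for_path "$tmpfile" && result=ok || result=timeout
wait
rm -f "$tmpfile"
```

The retry cap is what turns a hung daemon into a test failure instead of a stalled CI job.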
00:12:07.984 [2024-10-05 08:48:44.211831] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73218 ] 00:12:07.984 [2024-10-05 08:48:44.381378] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.243 [2024-10-05 08:48:44.633889] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.501 [2024-10-05 08:48:44.867683] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:08.501 [2024-10-05 08:48:44.867721] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:08.760 08:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:08.760 08:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:08.760 08:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:08.760 08:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:08.760 08:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.760 08:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.760 BaseBdev1_malloc 00:12:08.760 08:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.760 08:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:08.760 08:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.760 08:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.760 true 00:12:08.760 08:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
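[editor's note] The trace above shows the per-base-bdev stack the read-error test builds four times over: a malloc bdev (`bdev_malloc_create 32 512`), wrapped by an error-injection bdev (`bdev_error_create`), wrapped by a passthru bdev that gives the RAID test its `BaseBdevN` name. A dry-run sketch of that loop, echoing the RPC sequence instead of driving a live SPDK target (`rpc.py` stands in for the suite's `rpc_cmd` wrapper):

```shell
# Generate the RPC calls that build each base bdev stack:
#   malloc -> error-injection wrapper (EE_*) -> passthru alias BaseBdevN.
# Echoed as a dry run; against a live target each line would be executed
# as "scripts/rpc.py <method> ...".
num_base_bdevs=4
cmds=""
for i in $(seq 1 "$num_base_bdevs"); do
    b="BaseBdev$i"
    cmds="$cmds
rpc.py bdev_malloc_create 32 512 -b ${b}_malloc
rpc.py bdev_error_create ${b}_malloc
rpc.py bdev_passthru_create -b EE_${b}_malloc -p ${b}"
done
printf '%s\n' "$cmds"
```

Injecting errors through the EE_ wrapper rather than the malloc bdev itself is what lets the test fail I/O on one member while the RAID-1 volume stays online.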
00:12:08.760 08:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:08.760 08:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.760 08:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.760 [2024-10-05 08:48:45.093996] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:08.760 [2024-10-05 08:48:45.094073] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.760 [2024-10-05 08:48:45.094091] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:08.760 [2024-10-05 08:48:45.094102] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.760 [2024-10-05 08:48:45.096279] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.760 [2024-10-05 08:48:45.096316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:08.760 BaseBdev1 00:12:08.760 08:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.760 08:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:08.761 08:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:08.761 08:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.761 08:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.761 BaseBdev2_malloc 00:12:08.761 08:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.761 08:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:08.761 08:48:45 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.761 08:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.761 true 00:12:08.761 08:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.761 08:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:08.761 08:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.761 08:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.761 [2024-10-05 08:48:45.174711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:08.761 [2024-10-05 08:48:45.174768] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.761 [2024-10-05 08:48:45.174785] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:08.761 [2024-10-05 08:48:45.174796] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.761 [2024-10-05 08:48:45.177132] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.761 [2024-10-05 08:48:45.177173] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:08.761 BaseBdev2 00:12:08.761 08:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.761 08:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:08.761 08:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:08.761 08:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.761 08:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.761 BaseBdev3_malloc 00:12:08.761 08:48:45 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.761 08:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:08.761 08:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.761 08:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.021 true 00:12:09.021 08:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.021 08:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:09.021 08:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.021 08:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.021 [2024-10-05 08:48:45.248127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:09.021 [2024-10-05 08:48:45.248183] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.021 [2024-10-05 08:48:45.248198] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:09.021 [2024-10-05 08:48:45.248210] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.021 [2024-10-05 08:48:45.250538] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.021 [2024-10-05 08:48:45.250578] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:09.021 BaseBdev3 00:12:09.021 08:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.021 08:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:09.021 08:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:09.021 08:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.021 08:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.021 BaseBdev4_malloc 00:12:09.021 08:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.021 08:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:09.021 08:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.021 08:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.021 true 00:12:09.021 08:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.021 08:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:09.021 08:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.021 08:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.021 [2024-10-05 08:48:45.321815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:09.021 [2024-10-05 08:48:45.321880] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.021 [2024-10-05 08:48:45.321899] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:09.021 [2024-10-05 08:48:45.321911] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.021 [2024-10-05 08:48:45.324267] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.021 [2024-10-05 08:48:45.324308] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:09.021 BaseBdev4 00:12:09.021 08:48:45 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.021 08:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:09.021 08:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.021 08:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.021 [2024-10-05 08:48:45.333882] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:09.021 [2024-10-05 08:48:45.336025] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:09.021 [2024-10-05 08:48:45.336104] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:09.021 [2024-10-05 08:48:45.336162] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:09.021 [2024-10-05 08:48:45.336386] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:09.021 [2024-10-05 08:48:45.336407] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:09.021 [2024-10-05 08:48:45.336639] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:09.021 [2024-10-05 08:48:45.336821] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:09.021 [2024-10-05 08:48:45.336834] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:09.021 [2024-10-05 08:48:45.337001] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:09.021 08:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.021 08:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:09.021 08:48:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:09.021 08:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:09.021 08:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.021 08:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.021 08:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:09.021 08:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.021 08:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.021 08:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.021 08:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.021 08:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.021 08:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.021 08:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.021 08:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.021 08:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.021 08:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.021 "name": "raid_bdev1", 00:12:09.021 "uuid": "ccd218a8-bae8-4f69-abb2-adf382787829", 00:12:09.021 "strip_size_kb": 0, 00:12:09.021 "state": "online", 00:12:09.021 "raid_level": "raid1", 00:12:09.021 "superblock": true, 00:12:09.021 "num_base_bdevs": 4, 00:12:09.021 "num_base_bdevs_discovered": 4, 00:12:09.021 "num_base_bdevs_operational": 4, 00:12:09.022 "base_bdevs_list": [ 00:12:09.022 { 
00:12:09.022 "name": "BaseBdev1", 00:12:09.022 "uuid": "19c41870-ba13-5604-bbf3-4d4fa30a87a1", 00:12:09.022 "is_configured": true, 00:12:09.022 "data_offset": 2048, 00:12:09.022 "data_size": 63488 00:12:09.022 }, 00:12:09.022 { 00:12:09.022 "name": "BaseBdev2", 00:12:09.022 "uuid": "6483f421-215d-5912-8da1-6f8e9739b86d", 00:12:09.022 "is_configured": true, 00:12:09.022 "data_offset": 2048, 00:12:09.022 "data_size": 63488 00:12:09.022 }, 00:12:09.022 { 00:12:09.022 "name": "BaseBdev3", 00:12:09.022 "uuid": "d201e662-2027-5d77-b185-011cb3916c4b", 00:12:09.022 "is_configured": true, 00:12:09.022 "data_offset": 2048, 00:12:09.022 "data_size": 63488 00:12:09.022 }, 00:12:09.022 { 00:12:09.022 "name": "BaseBdev4", 00:12:09.022 "uuid": "163cbfb1-8cb2-5e0c-bcb6-813d80d257f6", 00:12:09.022 "is_configured": true, 00:12:09.022 "data_offset": 2048, 00:12:09.022 "data_size": 63488 00:12:09.022 } 00:12:09.022 ] 00:12:09.022 }' 00:12:09.022 08:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.022 08:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.590 08:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:09.590 08:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:09.590 [2024-10-05 08:48:45.874410] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:10.528 08:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:10.528 08:48:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.528 08:48:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.528 08:48:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.528 08:48:46 
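The `verify_raid_bdev_state` calls above pull one JSON object out of `bdev_raid_get_bdevs` with `jq` and compare fields against the expected state. As a minimal self-contained sketch of that field check, assuming only a trimmed copy of the blob from the log (no running SPDK target, no `jq`; `get_field` is a hypothetical helper, not part of the test suite):

```shell
# Trimmed from the raid_bdev_info blob logged above; one key per line.
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 4,
  "num_base_bdevs_operational": 4
}'

# get_field KEY -> bare value, quoted or numeric, using sed only.
get_field() {
  printf '%s\n' "$raid_bdev_info" |
    sed -n "s/.*\"$1\": *\"\{0,1\}\([^\",]*\)\"\{0,1\},\{0,1\}.*/\1/p"
}

# The same comparisons verify_raid_bdev_state performs on the live RPC output.
[ "$(get_field state)" = "online" ] && echo "state ok"
[ "$(get_field raid_level)" = "raid1" ] && echo "level ok"
[ "$(get_field num_base_bdevs_operational)" = "4" ] && echo "operational ok"
```

In the real test the blob comes from `rpc_cmd bdev_raid_get_bdevs all` piped through `jq -r '.[] | select(.name == "raid_bdev1")'`, as the xtrace lines show.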
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:10.529 08:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:10.529 08:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:10.529 08:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:10.529 08:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:10.529 08:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:10.529 08:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:10.529 08:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.529 08:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.529 08:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:10.529 08:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.529 08:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.529 08:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.529 08:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.529 08:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.529 08:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.529 08:48:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.529 08:48:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.529 08:48:46 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.529 08:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.529 "name": "raid_bdev1", 00:12:10.529 "uuid": "ccd218a8-bae8-4f69-abb2-adf382787829", 00:12:10.529 "strip_size_kb": 0, 00:12:10.529 "state": "online", 00:12:10.529 "raid_level": "raid1", 00:12:10.529 "superblock": true, 00:12:10.529 "num_base_bdevs": 4, 00:12:10.529 "num_base_bdevs_discovered": 4, 00:12:10.529 "num_base_bdevs_operational": 4, 00:12:10.529 "base_bdevs_list": [ 00:12:10.529 { 00:12:10.529 "name": "BaseBdev1", 00:12:10.529 "uuid": "19c41870-ba13-5604-bbf3-4d4fa30a87a1", 00:12:10.529 "is_configured": true, 00:12:10.529 "data_offset": 2048, 00:12:10.529 "data_size": 63488 00:12:10.529 }, 00:12:10.529 { 00:12:10.529 "name": "BaseBdev2", 00:12:10.529 "uuid": "6483f421-215d-5912-8da1-6f8e9739b86d", 00:12:10.529 "is_configured": true, 00:12:10.529 "data_offset": 2048, 00:12:10.529 "data_size": 63488 00:12:10.529 }, 00:12:10.529 { 00:12:10.529 "name": "BaseBdev3", 00:12:10.529 "uuid": "d201e662-2027-5d77-b185-011cb3916c4b", 00:12:10.529 "is_configured": true, 00:12:10.529 "data_offset": 2048, 00:12:10.529 "data_size": 63488 00:12:10.529 }, 00:12:10.529 { 00:12:10.529 "name": "BaseBdev4", 00:12:10.529 "uuid": "163cbfb1-8cb2-5e0c-bcb6-813d80d257f6", 00:12:10.529 "is_configured": true, 00:12:10.529 "data_offset": 2048, 00:12:10.529 "data_size": 63488 00:12:10.529 } 00:12:10.529 ] 00:12:10.529 }' 00:12:10.529 08:48:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.529 08:48:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.788 08:48:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:10.788 08:48:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.788 08:48:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:10.789 [2024-10-05 08:48:47.224665] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:10.789 [2024-10-05 08:48:47.224707] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:10.789 [2024-10-05 08:48:47.227461] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:10.789 [2024-10-05 08:48:47.227530] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.789 [2024-10-05 08:48:47.227658] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:10.789 [2024-10-05 08:48:47.227676] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:10.789 { 00:12:10.789 "results": [ 00:12:10.789 { 00:12:10.789 "job": "raid_bdev1", 00:12:10.789 "core_mask": "0x1", 00:12:10.789 "workload": "randrw", 00:12:10.789 "percentage": 50, 00:12:10.789 "status": "finished", 00:12:10.789 "queue_depth": 1, 00:12:10.789 "io_size": 131072, 00:12:10.789 "runtime": 1.350962, 00:12:10.789 "iops": 8115.698294992753, 00:12:10.789 "mibps": 1014.4622868740942, 00:12:10.789 "io_failed": 0, 00:12:10.789 "io_timeout": 0, 00:12:10.789 "avg_latency_us": 120.709148638896, 00:12:10.789 "min_latency_us": 22.581659388646287, 00:12:10.789 "max_latency_us": 1452.380786026201 00:12:10.789 } 00:12:10.789 ], 00:12:10.789 "core_count": 1 00:12:10.789 } 00:12:10.789 08:48:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.789 08:48:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73218 00:12:10.789 08:48:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 73218 ']' 00:12:10.789 08:48:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 73218 00:12:10.789 08:48:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # uname 00:12:10.789 08:48:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:10.789 08:48:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73218 00:12:11.048 08:48:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:11.048 08:48:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:11.048 killing process with pid 73218 00:12:11.048 08:48:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73218' 00:12:11.048 08:48:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 73218 00:12:11.048 [2024-10-05 08:48:47.277257] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:11.048 08:48:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 73218 00:12:11.308 [2024-10-05 08:48:47.622541] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:12.689 08:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.NjkIZqrxVp 00:12:12.689 08:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:12.689 08:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:12.689 08:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:12.689 08:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:12.689 08:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:12.689 08:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:12.689 08:48:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:12.689 00:12:12.689 real 0m4.922s 00:12:12.689 user 0m5.611s 00:12:12.689 sys 0m0.735s 
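The pass/fail gate for the read test is the `grep -v Job ... | grep raid_bdev1 | awk '{print $6}'` pipeline above: it extracts the fails-per-second column from the bdevperf log and requires `0.00`, since raid1 redundancy should absorb the injected read errors. A sketch of that extraction against a sample row (the row layout is assumed from the `awk '{print $6}'` in the log, not taken from real bdevperf output):

```shell
# Hypothetical bdevperf result row; only the per-job layout implied by
# the log's awk column index is assumed here.
sample='raid_bdev1 8115.70 1014.46 0.00 0.00 0.00'

# Mirror the log's pipeline: drop header rows, keep the raid_bdev1 row,
# take column 6 (fails per second).
fail_per_s=$(printf '%s\n' "$sample" | grep -v Job | grep raid_bdev1 | awk '{print $6}')

if [ "$fail_per_s" = "0.00" ]; then
  echo "no failed I/O: the injected BaseBdev1 read errors were recovered"
fi
```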
00:12:12.689 08:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:12.689 08:48:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.689 ************************************ 00:12:12.689 END TEST raid_read_error_test 00:12:12.689 ************************************ 00:12:12.689 08:48:49 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:12:12.689 08:48:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:12.689 08:48:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:12.689 08:48:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:12.689 ************************************ 00:12:12.689 START TEST raid_write_error_test 00:12:12.689 ************************************ 00:12:12.689 08:48:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 00:12:12.689 08:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:12.689 08:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:12.689 08:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:12.689 08:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:12.689 08:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:12.689 08:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:12.689 08:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:12.689 08:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:12.689 08:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:12.689 08:48:49 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:12.689 08:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:12.689 08:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:12.689 08:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:12.689 08:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:12.689 08:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:12.689 08:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:12.689 08:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:12.689 08:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:12.689 08:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:12.689 08:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:12.689 08:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:12.689 08:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:12.689 08:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:12.689 08:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:12.689 08:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:12.689 08:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:12.689 08:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:12.689 08:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.QSgB5erFg4 00:12:12.689 08:48:49 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73335 00:12:12.689 08:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:12.689 08:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73335 00:12:12.689 08:48:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 73335 ']' 00:12:12.689 08:48:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.689 08:48:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:12.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.689 08:48:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.689 08:48:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:12.689 08:48:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.949 [2024-10-05 08:48:49.206690] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:12:12.949 [2024-10-05 08:48:49.206819] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73335 ] 00:12:12.949 [2024-10-05 08:48:49.375835] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.209 [2024-10-05 08:48:49.616682] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.469 [2024-10-05 08:48:49.856902] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:13.469 [2024-10-05 08:48:49.856941] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:13.729 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:13.729 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:13.729 08:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:13.729 08:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:13.729 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.729 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.729 BaseBdev1_malloc 00:12:13.729 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.729 08:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:13.729 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.729 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.729 true 00:12:13.729 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:13.729 08:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:13.729 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.729 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.729 [2024-10-05 08:48:50.095010] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:13.729 [2024-10-05 08:48:50.095070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.729 [2024-10-05 08:48:50.095104] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:13.729 [2024-10-05 08:48:50.095116] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.729 [2024-10-05 08:48:50.097535] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.729 [2024-10-05 08:48:50.097575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:13.729 BaseBdev1 00:12:13.729 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.729 08:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:13.729 08:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:13.729 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.729 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.729 BaseBdev2_malloc 00:12:13.729 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.729 08:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:13.729 08:48:50 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.729 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.729 true 00:12:13.729 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.729 08:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:13.729 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.729 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.989 [2024-10-05 08:48:50.200909] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:13.989 [2024-10-05 08:48:50.200975] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.989 [2024-10-05 08:48:50.200992] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:13.989 [2024-10-05 08:48:50.201003] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.989 [2024-10-05 08:48:50.203335] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.989 [2024-10-05 08:48:50.203369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:13.989 BaseBdev2 00:12:13.989 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.989 08:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:13.989 08:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:13.989 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.989 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:13.989 BaseBdev3_malloc 00:12:13.989 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.989 08:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:13.989 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.989 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.989 true 00:12:13.989 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.989 08:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:13.989 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.989 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.989 [2024-10-05 08:48:50.273146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:13.989 [2024-10-05 08:48:50.273196] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.989 [2024-10-05 08:48:50.273213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:13.989 [2024-10-05 08:48:50.273224] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.989 [2024-10-05 08:48:50.275525] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.990 [2024-10-05 08:48:50.275561] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:13.990 BaseBdev3 00:12:13.990 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.990 08:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:13.990 08:48:50 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:13.990 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.990 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.990 BaseBdev4_malloc 00:12:13.990 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.990 08:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:13.990 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.990 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.990 true 00:12:13.990 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.990 08:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:13.990 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.990 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.990 [2024-10-05 08:48:50.347533] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:13.990 [2024-10-05 08:48:50.347581] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.990 [2024-10-05 08:48:50.347598] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:13.990 [2024-10-05 08:48:50.347613] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.990 [2024-10-05 08:48:50.349966] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.990 [2024-10-05 08:48:50.350001] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:13.990 BaseBdev4 
00:12:13.990 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.990 08:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:13.990 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.990 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.990 [2024-10-05 08:48:50.359596] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:13.990 [2024-10-05 08:48:50.361684] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:13.990 [2024-10-05 08:48:50.361763] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:13.990 [2024-10-05 08:48:50.361822] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:13.990 [2024-10-05 08:48:50.362054] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:13.990 [2024-10-05 08:48:50.362074] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:13.990 [2024-10-05 08:48:50.362310] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:13.990 [2024-10-05 08:48:50.362486] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:13.990 [2024-10-05 08:48:50.362501] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:13.990 [2024-10-05 08:48:50.362662] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:13.990 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.990 08:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:12:13.990 08:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:13.990 08:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:13.990 08:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.990 08:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:13.990 08:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:13.990 08:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.990 08:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.990 08:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.990 08:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.990 08:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.990 08:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.990 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.990 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.990 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.990 08:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.990 "name": "raid_bdev1", 00:12:13.990 "uuid": "a777eefe-26b9-428e-a974-598a56c1dc09", 00:12:13.990 "strip_size_kb": 0, 00:12:13.990 "state": "online", 00:12:13.990 "raid_level": "raid1", 00:12:13.990 "superblock": true, 00:12:13.990 "num_base_bdevs": 4, 00:12:13.990 "num_base_bdevs_discovered": 4, 00:12:13.990 
"num_base_bdevs_operational": 4, 00:12:13.990 "base_bdevs_list": [ 00:12:13.990 { 00:12:13.990 "name": "BaseBdev1", 00:12:13.990 "uuid": "685d78c3-12bb-55ff-969d-ea27b822c27d", 00:12:13.990 "is_configured": true, 00:12:13.990 "data_offset": 2048, 00:12:13.990 "data_size": 63488 00:12:13.990 }, 00:12:13.990 { 00:12:13.990 "name": "BaseBdev2", 00:12:13.990 "uuid": "3b11c79f-bbad-5cbf-8583-3ed945416461", 00:12:13.990 "is_configured": true, 00:12:13.990 "data_offset": 2048, 00:12:13.990 "data_size": 63488 00:12:13.990 }, 00:12:13.990 { 00:12:13.990 "name": "BaseBdev3", 00:12:13.990 "uuid": "7938c7bd-ab62-51dd-92af-34a285586428", 00:12:13.990 "is_configured": true, 00:12:13.990 "data_offset": 2048, 00:12:13.990 "data_size": 63488 00:12:13.990 }, 00:12:13.990 { 00:12:13.990 "name": "BaseBdev4", 00:12:13.990 "uuid": "9dda89fd-aacb-59c8-a4e6-3677514ef4e2", 00:12:13.990 "is_configured": true, 00:12:13.990 "data_offset": 2048, 00:12:13.990 "data_size": 63488 00:12:13.990 } 00:12:13.990 ] 00:12:13.990 }' 00:12:13.990 08:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.990 08:48:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.560 08:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:14.560 08:48:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:14.560 [2024-10-05 08:48:50.884333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:15.498 08:48:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:15.498 08:48:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.498 08:48:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.498 [2024-10-05 08:48:51.800895] 
bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:15.498 [2024-10-05 08:48:51.800976] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:15.498 [2024-10-05 08:48:51.801229] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:15.498 08:48:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.498 08:48:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:15.498 08:48:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:15.498 08:48:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:15.498 08:48:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:15.498 08:48:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:15.498 08:48:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:15.498 08:48:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:15.498 08:48:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.499 08:48:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.499 08:48:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:15.499 08:48:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.499 08:48:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.499 08:48:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.499 08:48:51 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.499 08:48:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.499 08:48:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.499 08:48:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.499 08:48:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.499 08:48:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.499 08:48:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.499 "name": "raid_bdev1", 00:12:15.499 "uuid": "a777eefe-26b9-428e-a974-598a56c1dc09", 00:12:15.499 "strip_size_kb": 0, 00:12:15.499 "state": "online", 00:12:15.499 "raid_level": "raid1", 00:12:15.499 "superblock": true, 00:12:15.499 "num_base_bdevs": 4, 00:12:15.499 "num_base_bdevs_discovered": 3, 00:12:15.499 "num_base_bdevs_operational": 3, 00:12:15.499 "base_bdevs_list": [ 00:12:15.499 { 00:12:15.499 "name": null, 00:12:15.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.499 "is_configured": false, 00:12:15.499 "data_offset": 0, 00:12:15.499 "data_size": 63488 00:12:15.499 }, 00:12:15.499 { 00:12:15.499 "name": "BaseBdev2", 00:12:15.499 "uuid": "3b11c79f-bbad-5cbf-8583-3ed945416461", 00:12:15.499 "is_configured": true, 00:12:15.499 "data_offset": 2048, 00:12:15.499 "data_size": 63488 00:12:15.499 }, 00:12:15.499 { 00:12:15.499 "name": "BaseBdev3", 00:12:15.499 "uuid": "7938c7bd-ab62-51dd-92af-34a285586428", 00:12:15.499 "is_configured": true, 00:12:15.499 "data_offset": 2048, 00:12:15.499 "data_size": 63488 00:12:15.499 }, 00:12:15.499 { 00:12:15.499 "name": "BaseBdev4", 00:12:15.499 "uuid": "9dda89fd-aacb-59c8-a4e6-3677514ef4e2", 00:12:15.499 "is_configured": true, 00:12:15.499 "data_offset": 2048, 00:12:15.499 "data_size": 63488 00:12:15.499 } 00:12:15.499 ] 
00:12:15.499 }' 00:12:15.499 08:48:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.499 08:48:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.067 08:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:16.067 08:48:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.067 08:48:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.067 [2024-10-05 08:48:52.287784] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:16.067 [2024-10-05 08:48:52.287823] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:16.067 [2024-10-05 08:48:52.290550] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:16.067 [2024-10-05 08:48:52.290601] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:16.067 [2024-10-05 08:48:52.290711] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:16.067 [2024-10-05 08:48:52.290726] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:16.067 { 00:12:16.067 "results": [ 00:12:16.067 { 00:12:16.067 "job": "raid_bdev1", 00:12:16.067 "core_mask": "0x1", 00:12:16.067 "workload": "randrw", 00:12:16.067 "percentage": 50, 00:12:16.067 "status": "finished", 00:12:16.067 "queue_depth": 1, 00:12:16.067 "io_size": 131072, 00:12:16.067 "runtime": 1.404154, 00:12:16.067 "iops": 8994.027720606144, 00:12:16.067 "mibps": 1124.253465075768, 00:12:16.067 "io_failed": 0, 00:12:16.067 "io_timeout": 0, 00:12:16.067 "avg_latency_us": 108.68623342476819, 00:12:16.067 "min_latency_us": 22.246288209606988, 00:12:16.067 "max_latency_us": 1495.3082969432314 00:12:16.067 } 00:12:16.067 ], 00:12:16.067 "core_count": 1 
00:12:16.067 } 00:12:16.067 08:48:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.067 08:48:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73335 00:12:16.067 08:48:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 73335 ']' 00:12:16.067 08:48:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 73335 00:12:16.067 08:48:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:12:16.067 08:48:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:16.067 08:48:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73335 00:12:16.067 08:48:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:16.067 08:48:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:16.067 killing process with pid 73335 00:12:16.067 08:48:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73335' 00:12:16.067 08:48:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 73335 00:12:16.067 [2024-10-05 08:48:52.338693] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:16.067 08:48:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 73335 00:12:16.325 [2024-10-05 08:48:52.679888] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:17.704 08:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.QSgB5erFg4 00:12:17.704 08:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:17.704 08:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:17.704 08:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:12:17.704 08:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:17.704 08:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:17.704 08:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:17.704 08:48:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:17.704 00:12:17.704 real 0m4.975s 00:12:17.704 user 0m5.641s 00:12:17.704 sys 0m0.782s 00:12:17.704 08:48:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:17.704 08:48:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.704 ************************************ 00:12:17.705 END TEST raid_write_error_test 00:12:17.705 ************************************ 00:12:17.705 08:48:54 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:12:17.705 08:48:54 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:17.705 08:48:54 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:12:17.705 08:48:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:17.705 08:48:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:17.705 08:48:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:17.705 ************************************ 00:12:17.705 START TEST raid_rebuild_test 00:12:17.705 ************************************ 00:12:17.705 08:48:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:12:17.705 08:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:17.705 08:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:17.705 08:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:17.705 
08:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:17.705 08:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:17.705 08:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:17.705 08:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:17.705 08:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:17.705 08:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:17.705 08:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:17.705 08:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:17.705 08:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:17.705 08:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:17.705 08:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:17.705 08:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:17.705 08:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:17.705 08:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:17.705 08:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:17.705 08:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:17.705 08:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:17.705 08:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:17.705 08:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:17.705 08:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:12:17.705 08:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=73457 00:12:17.705 08:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:17.705 08:48:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 73457 00:12:17.705 08:48:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 73457 ']' 00:12:17.705 08:48:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.705 08:48:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:17.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.705 08:48:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.705 08:48:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:17.705 08:48:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.964 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:17.964 Zero copy mechanism will not be used. 00:12:17.964 [2024-10-05 08:48:54.250895] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:12:17.964 [2024-10-05 08:48:54.251026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73457 ] 00:12:17.964 [2024-10-05 08:48:54.420137] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.222 [2024-10-05 08:48:54.651154] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.481 [2024-10-05 08:48:54.885806] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:18.481 [2024-10-05 08:48:54.885843] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:18.739 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:18.739 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:12:18.739 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:18.739 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:18.739 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.739 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.739 BaseBdev1_malloc 00:12:18.739 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.739 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:18.740 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.740 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.740 [2024-10-05 08:48:55.122032] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:18.740 
[2024-10-05 08:48:55.122097] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.740 [2024-10-05 08:48:55.122120] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:18.740 [2024-10-05 08:48:55.122135] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.740 [2024-10-05 08:48:55.124526] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.740 [2024-10-05 08:48:55.124562] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:18.740 BaseBdev1 00:12:18.740 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.740 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:18.740 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:18.740 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.740 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.740 BaseBdev2_malloc 00:12:18.740 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.740 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:18.740 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.740 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.740 [2024-10-05 08:48:55.192290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:18.740 [2024-10-05 08:48:55.192355] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.740 [2024-10-05 08:48:55.192374] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:12:18.740 [2024-10-05 08:48:55.192388] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.740 [2024-10-05 08:48:55.194736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.740 [2024-10-05 08:48:55.194768] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:18.740 BaseBdev2 00:12:18.740 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.740 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:18.740 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.740 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.999 spare_malloc 00:12:18.999 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.999 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:18.999 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.999 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.999 spare_delay 00:12:18.999 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.999 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:18.999 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.999 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.999 [2024-10-05 08:48:55.264080] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:18.999 [2024-10-05 08:48:55.264136] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:12:18.999 [2024-10-05 08:48:55.264157] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:18.999 [2024-10-05 08:48:55.264169] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.999 [2024-10-05 08:48:55.266503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.999 [2024-10-05 08:48:55.266538] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:18.999 spare 00:12:18.999 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.999 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:18.999 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.999 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.999 [2024-10-05 08:48:55.276125] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:18.999 [2024-10-05 08:48:55.278162] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:18.999 [2024-10-05 08:48:55.278249] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:18.999 [2024-10-05 08:48:55.278261] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:18.999 [2024-10-05 08:48:55.278527] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:18.999 [2024-10-05 08:48:55.278691] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:18.999 [2024-10-05 08:48:55.278706] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:18.999 [2024-10-05 08:48:55.278846] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:12:18.999 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.999 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:18.999 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:18.999 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.999 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.999 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.999 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:18.999 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.999 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.999 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.999 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.999 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.999 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.999 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:18.999 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.999 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.999 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.999 "name": "raid_bdev1", 00:12:18.999 "uuid": "ac6924e2-924b-4e64-bf55-11d848406d51", 00:12:18.999 "strip_size_kb": 0, 00:12:18.999 "state": "online", 00:12:18.999 
"raid_level": "raid1", 00:12:18.999 "superblock": false, 00:12:18.999 "num_base_bdevs": 2, 00:12:18.999 "num_base_bdevs_discovered": 2, 00:12:18.999 "num_base_bdevs_operational": 2, 00:12:18.999 "base_bdevs_list": [ 00:12:18.999 { 00:12:18.999 "name": "BaseBdev1", 00:12:18.999 "uuid": "4d6314ac-9893-545c-b7c2-b6c9d8d62442", 00:12:18.999 "is_configured": true, 00:12:18.999 "data_offset": 0, 00:12:18.999 "data_size": 65536 00:12:18.999 }, 00:12:18.999 { 00:12:18.999 "name": "BaseBdev2", 00:12:18.999 "uuid": "66f17992-a82c-5e65-878e-4a76097474eb", 00:12:18.999 "is_configured": true, 00:12:18.999 "data_offset": 0, 00:12:18.999 "data_size": 65536 00:12:18.999 } 00:12:18.999 ] 00:12:18.999 }' 00:12:18.999 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.999 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.258 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:19.258 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.258 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.258 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:19.258 [2024-10-05 08:48:55.699651] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:19.258 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.517 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:19.517 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.517 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.517 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.517 08:48:55 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:19.517 08:48:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.517 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:19.517 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:19.517 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:19.517 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:19.517 08:48:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:19.517 08:48:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:19.517 08:48:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:19.517 08:48:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:19.517 08:48:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:19.517 08:48:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:19.517 08:48:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:19.517 08:48:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:19.517 08:48:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:19.517 08:48:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:19.517 [2024-10-05 08:48:55.979131] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:19.776 /dev/nbd0 00:12:19.776 08:48:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:19.776 08:48:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:12:19.776 08:48:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:19.776 08:48:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:19.776 08:48:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:19.776 08:48:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:19.776 08:48:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:19.776 08:48:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:19.776 08:48:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:19.776 08:48:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:19.776 08:48:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:19.776 1+0 records in 00:12:19.776 1+0 records out 00:12:19.776 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375521 s, 10.9 MB/s 00:12:19.776 08:48:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.776 08:48:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:19.776 08:48:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:19.776 08:48:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:19.776 08:48:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:19.776 08:48:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:19.776 08:48:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:19.776 08:48:56 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:19.776 08:48:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:19.776 08:48:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:23.973 65536+0 records in 00:12:23.973 65536+0 records out 00:12:23.973 33554432 bytes (34 MB, 32 MiB) copied, 3.73156 s, 9.0 MB/s 00:12:23.973 08:48:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:23.973 08:48:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:23.973 08:48:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:23.973 08:48:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:23.973 08:48:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:23.973 08:48:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:23.973 08:48:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:23.973 [2024-10-05 08:48:59.977456] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:23.973 08:48:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:23.973 08:48:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:23.973 08:48:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:23.973 08:48:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:23.973 08:48:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:23.973 08:48:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:23.973 08:49:00 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:12:23.973 08:49:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:23.973 08:49:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:23.973 08:49:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.973 08:49:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.974 [2024-10-05 08:49:00.005460] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:23.974 08:49:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.974 08:49:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:23.974 08:49:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.974 08:49:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:23.974 08:49:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.974 08:49:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.974 08:49:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:23.974 08:49:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.974 08:49:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.974 08:49:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.974 08:49:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.974 08:49:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.974 08:49:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.974 08:49:00 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.974 08:49:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.974 08:49:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.974 08:49:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.974 "name": "raid_bdev1", 00:12:23.974 "uuid": "ac6924e2-924b-4e64-bf55-11d848406d51", 00:12:23.974 "strip_size_kb": 0, 00:12:23.974 "state": "online", 00:12:23.974 "raid_level": "raid1", 00:12:23.974 "superblock": false, 00:12:23.974 "num_base_bdevs": 2, 00:12:23.974 "num_base_bdevs_discovered": 1, 00:12:23.974 "num_base_bdevs_operational": 1, 00:12:23.974 "base_bdevs_list": [ 00:12:23.974 { 00:12:23.974 "name": null, 00:12:23.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.974 "is_configured": false, 00:12:23.974 "data_offset": 0, 00:12:23.974 "data_size": 65536 00:12:23.974 }, 00:12:23.974 { 00:12:23.974 "name": "BaseBdev2", 00:12:23.974 "uuid": "66f17992-a82c-5e65-878e-4a76097474eb", 00:12:23.974 "is_configured": true, 00:12:23.974 "data_offset": 0, 00:12:23.974 "data_size": 65536 00:12:23.974 } 00:12:23.974 ] 00:12:23.974 }' 00:12:23.974 08:49:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.974 08:49:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.974 08:49:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:23.974 08:49:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.974 08:49:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.234 [2024-10-05 08:49:00.444746] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:24.234 [2024-10-05 08:49:00.461099] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:12:24.234 08:49:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.234 08:49:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:24.234 [2024-10-05 08:49:00.462856] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:25.172 08:49:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:25.172 08:49:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:25.172 08:49:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:25.172 08:49:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:25.172 08:49:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:25.172 08:49:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.172 08:49:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.172 08:49:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.172 08:49:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.172 08:49:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.172 08:49:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:25.172 "name": "raid_bdev1", 00:12:25.172 "uuid": "ac6924e2-924b-4e64-bf55-11d848406d51", 00:12:25.172 "strip_size_kb": 0, 00:12:25.172 "state": "online", 00:12:25.172 "raid_level": "raid1", 00:12:25.172 "superblock": false, 00:12:25.172 "num_base_bdevs": 2, 00:12:25.172 "num_base_bdevs_discovered": 2, 00:12:25.172 "num_base_bdevs_operational": 2, 00:12:25.172 "process": { 00:12:25.172 "type": "rebuild", 00:12:25.172 "target": "spare", 00:12:25.172 "progress": { 00:12:25.172 
"blocks": 20480, 00:12:25.172 "percent": 31 00:12:25.172 } 00:12:25.172 }, 00:12:25.172 "base_bdevs_list": [ 00:12:25.172 { 00:12:25.172 "name": "spare", 00:12:25.172 "uuid": "89351d1a-ed40-574d-b73a-aac56222a68f", 00:12:25.172 "is_configured": true, 00:12:25.173 "data_offset": 0, 00:12:25.173 "data_size": 65536 00:12:25.173 }, 00:12:25.173 { 00:12:25.173 "name": "BaseBdev2", 00:12:25.173 "uuid": "66f17992-a82c-5e65-878e-4a76097474eb", 00:12:25.173 "is_configured": true, 00:12:25.173 "data_offset": 0, 00:12:25.173 "data_size": 65536 00:12:25.173 } 00:12:25.173 ] 00:12:25.173 }' 00:12:25.173 08:49:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:25.173 08:49:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:25.173 08:49:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:25.173 08:49:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:25.173 08:49:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:25.173 08:49:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.173 08:49:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.173 [2024-10-05 08:49:01.602328] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:25.432 [2024-10-05 08:49:01.667796] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:25.432 [2024-10-05 08:49:01.667908] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:25.432 [2024-10-05 08:49:01.667945] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:25.432 [2024-10-05 08:49:01.667982] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:25.432 08:49:01 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.432 08:49:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:25.432 08:49:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:25.432 08:49:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.432 08:49:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.432 08:49:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.432 08:49:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:25.432 08:49:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.432 08:49:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.432 08:49:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.432 08:49:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.432 08:49:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.432 08:49:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.432 08:49:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.432 08:49:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.432 08:49:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.432 08:49:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.432 "name": "raid_bdev1", 00:12:25.432 "uuid": "ac6924e2-924b-4e64-bf55-11d848406d51", 00:12:25.432 "strip_size_kb": 0, 00:12:25.432 "state": "online", 00:12:25.432 "raid_level": "raid1", 00:12:25.432 
"superblock": false, 00:12:25.432 "num_base_bdevs": 2, 00:12:25.432 "num_base_bdevs_discovered": 1, 00:12:25.432 "num_base_bdevs_operational": 1, 00:12:25.432 "base_bdevs_list": [ 00:12:25.432 { 00:12:25.432 "name": null, 00:12:25.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.432 "is_configured": false, 00:12:25.432 "data_offset": 0, 00:12:25.432 "data_size": 65536 00:12:25.432 }, 00:12:25.432 { 00:12:25.432 "name": "BaseBdev2", 00:12:25.432 "uuid": "66f17992-a82c-5e65-878e-4a76097474eb", 00:12:25.432 "is_configured": true, 00:12:25.432 "data_offset": 0, 00:12:25.432 "data_size": 65536 00:12:25.432 } 00:12:25.432 ] 00:12:25.432 }' 00:12:25.432 08:49:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.432 08:49:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.692 08:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:25.692 08:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:25.692 08:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:25.692 08:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:25.692 08:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:25.692 08:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.692 08:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.692 08:49:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.692 08:49:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.692 08:49:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.952 08:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:12:25.952 "name": "raid_bdev1", 00:12:25.952 "uuid": "ac6924e2-924b-4e64-bf55-11d848406d51", 00:12:25.952 "strip_size_kb": 0, 00:12:25.952 "state": "online", 00:12:25.952 "raid_level": "raid1", 00:12:25.952 "superblock": false, 00:12:25.952 "num_base_bdevs": 2, 00:12:25.952 "num_base_bdevs_discovered": 1, 00:12:25.952 "num_base_bdevs_operational": 1, 00:12:25.952 "base_bdevs_list": [ 00:12:25.952 { 00:12:25.952 "name": null, 00:12:25.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.952 "is_configured": false, 00:12:25.952 "data_offset": 0, 00:12:25.952 "data_size": 65536 00:12:25.952 }, 00:12:25.952 { 00:12:25.952 "name": "BaseBdev2", 00:12:25.952 "uuid": "66f17992-a82c-5e65-878e-4a76097474eb", 00:12:25.952 "is_configured": true, 00:12:25.952 "data_offset": 0, 00:12:25.952 "data_size": 65536 00:12:25.952 } 00:12:25.952 ] 00:12:25.952 }' 00:12:25.952 08:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:25.952 08:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:25.952 08:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:25.952 08:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:25.952 08:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:25.952 08:49:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.952 08:49:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.952 [2024-10-05 08:49:02.263513] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:25.952 [2024-10-05 08:49:02.278326] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:12:25.952 08:49:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.952 
08:49:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:25.952 [2024-10-05 08:49:02.280104] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:26.890 08:49:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:26.890 08:49:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:26.890 08:49:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:26.890 08:49:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:26.890 08:49:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:26.890 08:49:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.890 08:49:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.890 08:49:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.890 08:49:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.890 08:49:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.890 08:49:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:26.890 "name": "raid_bdev1", 00:12:26.890 "uuid": "ac6924e2-924b-4e64-bf55-11d848406d51", 00:12:26.890 "strip_size_kb": 0, 00:12:26.890 "state": "online", 00:12:26.890 "raid_level": "raid1", 00:12:26.890 "superblock": false, 00:12:26.890 "num_base_bdevs": 2, 00:12:26.890 "num_base_bdevs_discovered": 2, 00:12:26.890 "num_base_bdevs_operational": 2, 00:12:26.890 "process": { 00:12:26.890 "type": "rebuild", 00:12:26.890 "target": "spare", 00:12:26.890 "progress": { 00:12:26.890 "blocks": 20480, 00:12:26.890 "percent": 31 00:12:26.890 } 00:12:26.890 }, 00:12:26.890 "base_bdevs_list": [ 
00:12:26.890 { 00:12:26.890 "name": "spare", 00:12:26.890 "uuid": "89351d1a-ed40-574d-b73a-aac56222a68f", 00:12:26.890 "is_configured": true, 00:12:26.890 "data_offset": 0, 00:12:26.890 "data_size": 65536 00:12:26.890 }, 00:12:26.890 { 00:12:26.890 "name": "BaseBdev2", 00:12:26.890 "uuid": "66f17992-a82c-5e65-878e-4a76097474eb", 00:12:26.890 "is_configured": true, 00:12:26.890 "data_offset": 0, 00:12:26.890 "data_size": 65536 00:12:26.890 } 00:12:26.890 ] 00:12:26.890 }' 00:12:26.891 08:49:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:27.151 08:49:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:27.151 08:49:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:27.151 08:49:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:27.151 08:49:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:27.151 08:49:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:27.151 08:49:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:27.151 08:49:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:27.151 08:49:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=375 00:12:27.151 08:49:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:27.151 08:49:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:27.151 08:49:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:27.151 08:49:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:27.151 08:49:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:27.151 
08:49:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:27.151 08:49:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.151 08:49:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.151 08:49:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.151 08:49:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.151 08:49:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.151 08:49:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:27.151 "name": "raid_bdev1", 00:12:27.151 "uuid": "ac6924e2-924b-4e64-bf55-11d848406d51", 00:12:27.151 "strip_size_kb": 0, 00:12:27.151 "state": "online", 00:12:27.151 "raid_level": "raid1", 00:12:27.151 "superblock": false, 00:12:27.151 "num_base_bdevs": 2, 00:12:27.151 "num_base_bdevs_discovered": 2, 00:12:27.151 "num_base_bdevs_operational": 2, 00:12:27.151 "process": { 00:12:27.151 "type": "rebuild", 00:12:27.151 "target": "spare", 00:12:27.151 "progress": { 00:12:27.151 "blocks": 22528, 00:12:27.151 "percent": 34 00:12:27.151 } 00:12:27.151 }, 00:12:27.151 "base_bdevs_list": [ 00:12:27.151 { 00:12:27.151 "name": "spare", 00:12:27.151 "uuid": "89351d1a-ed40-574d-b73a-aac56222a68f", 00:12:27.151 "is_configured": true, 00:12:27.151 "data_offset": 0, 00:12:27.151 "data_size": 65536 00:12:27.151 }, 00:12:27.151 { 00:12:27.151 "name": "BaseBdev2", 00:12:27.151 "uuid": "66f17992-a82c-5e65-878e-4a76097474eb", 00:12:27.151 "is_configured": true, 00:12:27.151 "data_offset": 0, 00:12:27.151 "data_size": 65536 00:12:27.151 } 00:12:27.151 ] 00:12:27.151 }' 00:12:27.151 08:49:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:27.151 08:49:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:12:27.151 08:49:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:27.151 08:49:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:27.151 08:49:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:28.535 08:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:28.535 08:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:28.535 08:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:28.535 08:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:28.535 08:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:28.535 08:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:28.535 08:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.535 08:49:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.535 08:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.535 08:49:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.535 08:49:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.535 08:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:28.535 "name": "raid_bdev1", 00:12:28.535 "uuid": "ac6924e2-924b-4e64-bf55-11d848406d51", 00:12:28.535 "strip_size_kb": 0, 00:12:28.535 "state": "online", 00:12:28.535 "raid_level": "raid1", 00:12:28.535 "superblock": false, 00:12:28.535 "num_base_bdevs": 2, 00:12:28.535 "num_base_bdevs_discovered": 2, 00:12:28.535 "num_base_bdevs_operational": 2, 00:12:28.535 "process": { 
00:12:28.535 "type": "rebuild", 00:12:28.535 "target": "spare", 00:12:28.535 "progress": { 00:12:28.535 "blocks": 47104, 00:12:28.535 "percent": 71 00:12:28.535 } 00:12:28.535 }, 00:12:28.535 "base_bdevs_list": [ 00:12:28.535 { 00:12:28.535 "name": "spare", 00:12:28.535 "uuid": "89351d1a-ed40-574d-b73a-aac56222a68f", 00:12:28.535 "is_configured": true, 00:12:28.535 "data_offset": 0, 00:12:28.535 "data_size": 65536 00:12:28.535 }, 00:12:28.535 { 00:12:28.535 "name": "BaseBdev2", 00:12:28.535 "uuid": "66f17992-a82c-5e65-878e-4a76097474eb", 00:12:28.535 "is_configured": true, 00:12:28.535 "data_offset": 0, 00:12:28.535 "data_size": 65536 00:12:28.535 } 00:12:28.535 ] 00:12:28.535 }' 00:12:28.535 08:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:28.535 08:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:28.535 08:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:28.535 08:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:28.535 08:49:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:29.105 [2024-10-05 08:49:05.493369] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:29.105 [2024-10-05 08:49:05.493457] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:29.105 [2024-10-05 08:49:05.493526] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.364 08:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:29.364 08:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:29.364 08:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:29.364 08:49:05 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:29.364 08:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:29.364 08:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:29.364 08:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.364 08:49:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.364 08:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.364 08:49:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.364 08:49:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.364 08:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:29.364 "name": "raid_bdev1", 00:12:29.364 "uuid": "ac6924e2-924b-4e64-bf55-11d848406d51", 00:12:29.364 "strip_size_kb": 0, 00:12:29.364 "state": "online", 00:12:29.364 "raid_level": "raid1", 00:12:29.364 "superblock": false, 00:12:29.364 "num_base_bdevs": 2, 00:12:29.364 "num_base_bdevs_discovered": 2, 00:12:29.364 "num_base_bdevs_operational": 2, 00:12:29.364 "base_bdevs_list": [ 00:12:29.364 { 00:12:29.364 "name": "spare", 00:12:29.364 "uuid": "89351d1a-ed40-574d-b73a-aac56222a68f", 00:12:29.364 "is_configured": true, 00:12:29.364 "data_offset": 0, 00:12:29.364 "data_size": 65536 00:12:29.364 }, 00:12:29.364 { 00:12:29.364 "name": "BaseBdev2", 00:12:29.364 "uuid": "66f17992-a82c-5e65-878e-4a76097474eb", 00:12:29.364 "is_configured": true, 00:12:29.364 "data_offset": 0, 00:12:29.364 "data_size": 65536 00:12:29.364 } 00:12:29.364 ] 00:12:29.364 }' 00:12:29.364 08:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:29.624 08:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:29.624 08:49:05 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:29.624 08:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:29.624 08:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:29.624 08:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:29.624 08:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:29.624 08:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:29.624 08:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:29.624 08:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:29.624 08:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.624 08:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.624 08:49:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.624 08:49:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.624 08:49:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.624 08:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:29.624 "name": "raid_bdev1", 00:12:29.624 "uuid": "ac6924e2-924b-4e64-bf55-11d848406d51", 00:12:29.624 "strip_size_kb": 0, 00:12:29.624 "state": "online", 00:12:29.624 "raid_level": "raid1", 00:12:29.624 "superblock": false, 00:12:29.624 "num_base_bdevs": 2, 00:12:29.624 "num_base_bdevs_discovered": 2, 00:12:29.624 "num_base_bdevs_operational": 2, 00:12:29.624 "base_bdevs_list": [ 00:12:29.624 { 00:12:29.624 "name": "spare", 00:12:29.624 "uuid": "89351d1a-ed40-574d-b73a-aac56222a68f", 00:12:29.624 "is_configured": true, 
00:12:29.624 "data_offset": 0, 00:12:29.624 "data_size": 65536 00:12:29.624 }, 00:12:29.624 { 00:12:29.624 "name": "BaseBdev2", 00:12:29.624 "uuid": "66f17992-a82c-5e65-878e-4a76097474eb", 00:12:29.624 "is_configured": true, 00:12:29.624 "data_offset": 0, 00:12:29.624 "data_size": 65536 00:12:29.624 } 00:12:29.624 ] 00:12:29.624 }' 00:12:29.624 08:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:29.624 08:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:29.624 08:49:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:29.624 08:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:29.624 08:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:29.624 08:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.624 08:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:29.624 08:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.624 08:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.624 08:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:29.624 08:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.624 08:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.624 08:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.624 08:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.624 08:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.624 08:49:06 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.624 08:49:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.624 08:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.624 08:49:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.624 08:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.624 "name": "raid_bdev1", 00:12:29.624 "uuid": "ac6924e2-924b-4e64-bf55-11d848406d51", 00:12:29.624 "strip_size_kb": 0, 00:12:29.624 "state": "online", 00:12:29.624 "raid_level": "raid1", 00:12:29.624 "superblock": false, 00:12:29.624 "num_base_bdevs": 2, 00:12:29.624 "num_base_bdevs_discovered": 2, 00:12:29.624 "num_base_bdevs_operational": 2, 00:12:29.624 "base_bdevs_list": [ 00:12:29.624 { 00:12:29.624 "name": "spare", 00:12:29.624 "uuid": "89351d1a-ed40-574d-b73a-aac56222a68f", 00:12:29.624 "is_configured": true, 00:12:29.624 "data_offset": 0, 00:12:29.624 "data_size": 65536 00:12:29.624 }, 00:12:29.624 { 00:12:29.624 "name": "BaseBdev2", 00:12:29.624 "uuid": "66f17992-a82c-5e65-878e-4a76097474eb", 00:12:29.624 "is_configured": true, 00:12:29.624 "data_offset": 0, 00:12:29.624 "data_size": 65536 00:12:29.624 } 00:12:29.624 ] 00:12:29.624 }' 00:12:29.624 08:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.624 08:49:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.193 08:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:30.193 08:49:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.193 08:49:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.193 [2024-10-05 08:49:06.499736] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:30.193 [2024-10-05 08:49:06.499771] 
bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:30.193 [2024-10-05 08:49:06.499847] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:30.193 [2024-10-05 08:49:06.499925] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:30.194 [2024-10-05 08:49:06.499935] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:30.194 08:49:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.194 08:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.194 08:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:30.194 08:49:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.194 08:49:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.194 08:49:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.194 08:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:30.194 08:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:30.194 08:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:30.194 08:49:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:30.194 08:49:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:30.194 08:49:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:30.194 08:49:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:30.194 08:49:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:12:30.194 08:49:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:30.194 08:49:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:30.194 08:49:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:30.194 08:49:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:30.194 08:49:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:30.453 /dev/nbd0 00:12:30.453 08:49:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:30.453 08:49:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:30.454 08:49:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:30.454 08:49:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:30.454 08:49:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:30.454 08:49:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:30.454 08:49:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:30.454 08:49:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:30.454 08:49:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:30.454 08:49:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:30.454 08:49:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:30.454 1+0 records in 00:12:30.454 1+0 records out 00:12:30.454 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419201 s, 9.8 MB/s 00:12:30.454 08:49:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:30.454 08:49:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:30.454 08:49:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:30.454 08:49:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:30.454 08:49:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:30.454 08:49:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:30.454 08:49:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:30.454 08:49:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:30.714 /dev/nbd1 00:12:30.714 08:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:30.714 08:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:30.714 08:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:30.714 08:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:30.714 08:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:30.714 08:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:30.714 08:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:30.714 08:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:30.714 08:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:30.714 08:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:30.714 08:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:30.714 1+0 records in 00:12:30.714 1+0 records out 00:12:30.714 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000486742 s, 8.4 MB/s 00:12:30.714 08:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:30.714 08:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:30.714 08:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:30.714 08:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:30.714 08:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:30.714 08:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:30.714 08:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:30.714 08:49:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:30.974 08:49:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:30.974 08:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:30.974 08:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:30.974 08:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:30.974 08:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:30.974 08:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:30.974 08:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:31.234 08:49:07 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:31.234 08:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:31.234 08:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:31.234 08:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:31.234 08:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:31.234 08:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:31.234 08:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:31.234 08:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:31.234 08:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:31.234 08:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:31.235 08:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:31.235 08:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:31.235 08:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:31.235 08:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:31.235 08:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:31.235 08:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:31.235 08:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:31.235 08:49:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:31.235 08:49:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:31.235 08:49:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 73457 00:12:31.235 08:49:07 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 73457 ']' 00:12:31.235 08:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 73457 00:12:31.235 08:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:12:31.235 08:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:31.235 08:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73457 00:12:31.235 08:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:31.235 08:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:31.235 08:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73457' 00:12:31.235 killing process with pid 73457 00:12:31.235 08:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 73457 00:12:31.235 Received shutdown signal, test time was about 60.000000 seconds 00:12:31.235 00:12:31.235 Latency(us) 00:12:31.235 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:31.235 =================================================================================================================== 00:12:31.235 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:31.235 [2024-10-05 08:49:07.698348] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:31.235 08:49:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 73457 00:12:31.805 [2024-10-05 08:49:07.988128] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:32.751 08:49:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:32.751 00:12:32.752 real 0m15.045s 00:12:32.752 user 0m16.939s 00:12:32.752 sys 0m3.008s 00:12:32.752 ************************************ 00:12:32.752 END TEST raid_rebuild_test 00:12:32.752 
************************************ 00:12:32.752 08:49:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:32.752 08:49:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.012 08:49:09 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:12:33.012 08:49:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:33.012 08:49:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:33.012 08:49:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:33.012 ************************************ 00:12:33.012 START TEST raid_rebuild_test_sb 00:12:33.012 ************************************ 00:12:33.012 08:49:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:12:33.012 08:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:33.012 08:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:33.013 08:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:33.013 08:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:33.013 08:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:33.013 08:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:33.013 08:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:33.013 08:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:33.013 08:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:33.013 08:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:33.013 08:49:09 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:33.013 08:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:33.013 08:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:33.013 08:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:33.013 08:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:33.013 08:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:33.013 08:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:33.013 08:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:33.013 08:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:33.013 08:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:33.013 08:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:33.013 08:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:33.013 08:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:33.013 08:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:33.013 08:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=73790 00:12:33.013 08:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 73790 00:12:33.013 08:49:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:33.013 08:49:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 73790 ']' 00:12:33.013 08:49:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:12:33.013 08:49:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:33.013 08:49:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.013 08:49:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:33.013 08:49:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.013 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:33.013 Zero copy mechanism will not be used. 00:12:33.013 [2024-10-05 08:49:09.367916] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:12:33.013 [2024-10-05 08:49:09.368051] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73790 ] 00:12:33.273 [2024-10-05 08:49:09.529682] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.273 [2024-10-05 08:49:09.724395] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.532 [2024-10-05 08:49:09.917431] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:33.532 [2024-10-05 08:49:09.917548] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:33.793 08:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:33.793 08:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:12:33.793 08:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:33.793 08:49:10 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:33.793 08:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.793 08:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.793 BaseBdev1_malloc 00:12:33.793 08:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.793 08:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:33.793 08:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.793 08:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.793 [2024-10-05 08:49:10.217368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:33.793 [2024-10-05 08:49:10.217457] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.793 [2024-10-05 08:49:10.217483] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:33.793 [2024-10-05 08:49:10.217497] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.793 [2024-10-05 08:49:10.219437] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.793 [2024-10-05 08:49:10.219477] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:33.793 BaseBdev1 00:12:33.793 08:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.793 08:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:33.793 08:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:33.793 08:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:33.793 08:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.054 BaseBdev2_malloc 00:12:34.054 08:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.054 08:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:34.054 08:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.054 08:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.054 [2024-10-05 08:49:10.302992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:34.054 [2024-10-05 08:49:10.303052] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.054 [2024-10-05 08:49:10.303072] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:34.054 [2024-10-05 08:49:10.303082] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.054 [2024-10-05 08:49:10.304943] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.054 [2024-10-05 08:49:10.305071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:34.054 BaseBdev2 00:12:34.054 08:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.054 08:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:34.054 08:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.054 08:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.054 spare_malloc 00:12:34.054 08:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.054 08:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # 
rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:34.054 08:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.054 08:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.054 spare_delay 00:12:34.054 08:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.054 08:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:34.054 08:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.054 08:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.054 [2024-10-05 08:49:10.369398] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:34.054 [2024-10-05 08:49:10.369453] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.054 [2024-10-05 08:49:10.369470] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:34.054 [2024-10-05 08:49:10.369481] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.054 [2024-10-05 08:49:10.371409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.054 [2024-10-05 08:49:10.371517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:34.054 spare 00:12:34.054 08:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.054 08:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:34.054 08:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.054 08:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.054 
[2024-10-05 08:49:10.381432] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:34.054 [2024-10-05 08:49:10.383165] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:34.054 [2024-10-05 08:49:10.383321] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:34.055 [2024-10-05 08:49:10.383335] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:34.055 [2024-10-05 08:49:10.383572] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:34.055 [2024-10-05 08:49:10.383715] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:34.055 [2024-10-05 08:49:10.383724] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:34.055 [2024-10-05 08:49:10.383869] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.055 08:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.055 08:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:34.055 08:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.055 08:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.055 08:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.055 08:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.055 08:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:34.055 08:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.055 08:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:12:34.055 08:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.055 08:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.055 08:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.055 08:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.055 08:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.055 08:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.055 08:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.055 08:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.055 "name": "raid_bdev1", 00:12:34.055 "uuid": "7ab7de59-dc97-4bd3-bcaa-bccaaf2548bf", 00:12:34.055 "strip_size_kb": 0, 00:12:34.055 "state": "online", 00:12:34.055 "raid_level": "raid1", 00:12:34.055 "superblock": true, 00:12:34.055 "num_base_bdevs": 2, 00:12:34.055 "num_base_bdevs_discovered": 2, 00:12:34.055 "num_base_bdevs_operational": 2, 00:12:34.055 "base_bdevs_list": [ 00:12:34.055 { 00:12:34.055 "name": "BaseBdev1", 00:12:34.055 "uuid": "80c672a6-b909-58d4-88e0-e7802420391f", 00:12:34.055 "is_configured": true, 00:12:34.055 "data_offset": 2048, 00:12:34.055 "data_size": 63488 00:12:34.055 }, 00:12:34.055 { 00:12:34.055 "name": "BaseBdev2", 00:12:34.055 "uuid": "cf618ba5-c8d9-541d-b5f6-e2729b53acf2", 00:12:34.055 "is_configured": true, 00:12:34.055 "data_offset": 2048, 00:12:34.055 "data_size": 63488 00:12:34.055 } 00:12:34.055 ] 00:12:34.055 }' 00:12:34.055 08:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.055 08:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.626 08:49:10 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:34.626 08:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:34.626 08:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.626 08:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.626 [2024-10-05 08:49:10.812991] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:34.626 08:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.626 08:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:34.626 08:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:34.626 08:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.626 08:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.626 08:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.626 08:49:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.626 08:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:34.626 08:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:34.626 08:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:34.626 08:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:34.626 08:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:34.626 08:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:34.626 08:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- 
# bdev_list=('raid_bdev1') 00:12:34.626 08:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:34.626 08:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:34.626 08:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:34.626 08:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:34.626 08:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:34.626 08:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:34.626 08:49:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:34.626 [2024-10-05 08:49:11.064304] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:34.626 /dev/nbd0 00:12:34.886 08:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:34.886 08:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:34.886 08:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:34.886 08:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:34.886 08:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:34.886 08:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:34.886 08:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:34.886 08:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:34.886 08:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:34.886 08:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 
00:12:34.886 08:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:34.886 1+0 records in 00:12:34.886 1+0 records out 00:12:34.886 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000528956 s, 7.7 MB/s 00:12:34.886 08:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.886 08:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:34.886 08:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.886 08:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:34.886 08:49:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:34.886 08:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:34.886 08:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:34.886 08:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:34.886 08:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:34.886 08:49:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:39.076 63488+0 records in 00:12:39.076 63488+0 records out 00:12:39.076 32505856 bytes (33 MB, 31 MiB) copied, 4.05291 s, 8.0 MB/s 00:12:39.076 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:39.076 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:39.076 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:39.076 08:49:15 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:39.076 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:39.076 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:39.076 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:39.076 [2024-10-05 08:49:15.379009] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:39.076 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:39.076 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:39.076 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:39.076 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:39.076 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:39.076 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:39.076 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:39.076 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:39.076 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:39.076 08:49:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.076 08:49:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.076 [2024-10-05 08:49:15.411015] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:39.076 08:49:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.076 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:39.076 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:39.076 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:39.076 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:39.076 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:39.076 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:39.076 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.076 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.076 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.076 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.076 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.076 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.076 08:49:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.076 08:49:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.076 08:49:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.076 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.076 "name": "raid_bdev1", 00:12:39.076 "uuid": "7ab7de59-dc97-4bd3-bcaa-bccaaf2548bf", 00:12:39.076 "strip_size_kb": 0, 00:12:39.076 "state": "online", 00:12:39.076 "raid_level": "raid1", 00:12:39.076 "superblock": true, 00:12:39.076 "num_base_bdevs": 2, 00:12:39.076 "num_base_bdevs_discovered": 1, 00:12:39.076 
"num_base_bdevs_operational": 1, 00:12:39.076 "base_bdevs_list": [ 00:12:39.076 { 00:12:39.076 "name": null, 00:12:39.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.076 "is_configured": false, 00:12:39.076 "data_offset": 0, 00:12:39.076 "data_size": 63488 00:12:39.076 }, 00:12:39.076 { 00:12:39.076 "name": "BaseBdev2", 00:12:39.076 "uuid": "cf618ba5-c8d9-541d-b5f6-e2729b53acf2", 00:12:39.076 "is_configured": true, 00:12:39.076 "data_offset": 2048, 00:12:39.076 "data_size": 63488 00:12:39.076 } 00:12:39.076 ] 00:12:39.076 }' 00:12:39.076 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.076 08:49:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.645 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:39.645 08:49:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.645 08:49:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.645 [2024-10-05 08:49:15.830304] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:39.645 [2024-10-05 08:49:15.846466] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:12:39.645 [2024-10-05 08:49:15.848250] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:39.645 08:49:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.645 08:49:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:40.584 08:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:40.584 08:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:40.584 08:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:12:40.584 08:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:40.584 08:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:40.584 08:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.584 08:49:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.584 08:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.584 08:49:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.584 08:49:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.584 08:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.584 "name": "raid_bdev1", 00:12:40.584 "uuid": "7ab7de59-dc97-4bd3-bcaa-bccaaf2548bf", 00:12:40.584 "strip_size_kb": 0, 00:12:40.584 "state": "online", 00:12:40.584 "raid_level": "raid1", 00:12:40.584 "superblock": true, 00:12:40.584 "num_base_bdevs": 2, 00:12:40.584 "num_base_bdevs_discovered": 2, 00:12:40.584 "num_base_bdevs_operational": 2, 00:12:40.584 "process": { 00:12:40.584 "type": "rebuild", 00:12:40.584 "target": "spare", 00:12:40.584 "progress": { 00:12:40.584 "blocks": 20480, 00:12:40.584 "percent": 32 00:12:40.584 } 00:12:40.584 }, 00:12:40.584 "base_bdevs_list": [ 00:12:40.584 { 00:12:40.584 "name": "spare", 00:12:40.584 "uuid": "2048b15f-65e5-5ed9-82cb-74eb0c3b6794", 00:12:40.584 "is_configured": true, 00:12:40.585 "data_offset": 2048, 00:12:40.585 "data_size": 63488 00:12:40.585 }, 00:12:40.585 { 00:12:40.585 "name": "BaseBdev2", 00:12:40.585 "uuid": "cf618ba5-c8d9-541d-b5f6-e2729b53acf2", 00:12:40.585 "is_configured": true, 00:12:40.585 "data_offset": 2048, 00:12:40.585 "data_size": 63488 00:12:40.585 } 00:12:40.585 ] 00:12:40.585 }' 00:12:40.585 08:49:16 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:40.585 08:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:40.585 08:49:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:40.585 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:40.585 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:40.585 08:49:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.585 08:49:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.585 [2024-10-05 08:49:17.016065] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:40.585 [2024-10-05 08:49:17.052958] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:40.585 [2024-10-05 08:49:17.053028] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:40.585 [2024-10-05 08:49:17.053043] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:40.585 [2024-10-05 08:49:17.053052] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:40.844 08:49:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.844 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:40.844 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.844 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.844 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.844 08:49:17 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.845 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:40.845 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.845 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.845 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.845 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.845 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.845 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.845 08:49:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.845 08:49:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.845 08:49:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.845 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.845 "name": "raid_bdev1", 00:12:40.845 "uuid": "7ab7de59-dc97-4bd3-bcaa-bccaaf2548bf", 00:12:40.845 "strip_size_kb": 0, 00:12:40.845 "state": "online", 00:12:40.845 "raid_level": "raid1", 00:12:40.845 "superblock": true, 00:12:40.845 "num_base_bdevs": 2, 00:12:40.845 "num_base_bdevs_discovered": 1, 00:12:40.845 "num_base_bdevs_operational": 1, 00:12:40.845 "base_bdevs_list": [ 00:12:40.845 { 00:12:40.845 "name": null, 00:12:40.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.845 "is_configured": false, 00:12:40.845 "data_offset": 0, 00:12:40.845 "data_size": 63488 00:12:40.845 }, 00:12:40.845 { 00:12:40.845 "name": "BaseBdev2", 00:12:40.845 "uuid": "cf618ba5-c8d9-541d-b5f6-e2729b53acf2", 00:12:40.845 
"is_configured": true, 00:12:40.845 "data_offset": 2048, 00:12:40.845 "data_size": 63488 00:12:40.845 } 00:12:40.845 ] 00:12:40.845 }' 00:12:40.845 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.845 08:49:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.104 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:41.104 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:41.104 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:41.104 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:41.104 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:41.104 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.104 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.104 08:49:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.104 08:49:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.104 08:49:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.104 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:41.104 "name": "raid_bdev1", 00:12:41.104 "uuid": "7ab7de59-dc97-4bd3-bcaa-bccaaf2548bf", 00:12:41.104 "strip_size_kb": 0, 00:12:41.104 "state": "online", 00:12:41.104 "raid_level": "raid1", 00:12:41.104 "superblock": true, 00:12:41.104 "num_base_bdevs": 2, 00:12:41.104 "num_base_bdevs_discovered": 1, 00:12:41.104 "num_base_bdevs_operational": 1, 00:12:41.104 "base_bdevs_list": [ 00:12:41.104 { 00:12:41.104 "name": null, 00:12:41.104 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:41.104 "is_configured": false, 00:12:41.104 "data_offset": 0, 00:12:41.104 "data_size": 63488 00:12:41.104 }, 00:12:41.104 { 00:12:41.104 "name": "BaseBdev2", 00:12:41.104 "uuid": "cf618ba5-c8d9-541d-b5f6-e2729b53acf2", 00:12:41.104 "is_configured": true, 00:12:41.105 "data_offset": 2048, 00:12:41.105 "data_size": 63488 00:12:41.105 } 00:12:41.105 ] 00:12:41.105 }' 00:12:41.105 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:41.364 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:41.364 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:41.364 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:41.364 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:41.364 08:49:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.364 08:49:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.364 [2024-10-05 08:49:17.635944] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:41.364 [2024-10-05 08:49:17.650576] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:12:41.365 08:49:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.365 08:49:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:41.365 [2024-10-05 08:49:17.652308] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:42.306 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:42.306 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:12:42.306 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:42.306 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:42.306 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:42.306 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.306 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.306 08:49:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.306 08:49:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.306 08:49:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.306 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:42.306 "name": "raid_bdev1", 00:12:42.306 "uuid": "7ab7de59-dc97-4bd3-bcaa-bccaaf2548bf", 00:12:42.306 "strip_size_kb": 0, 00:12:42.306 "state": "online", 00:12:42.306 "raid_level": "raid1", 00:12:42.306 "superblock": true, 00:12:42.306 "num_base_bdevs": 2, 00:12:42.306 "num_base_bdevs_discovered": 2, 00:12:42.306 "num_base_bdevs_operational": 2, 00:12:42.306 "process": { 00:12:42.306 "type": "rebuild", 00:12:42.306 "target": "spare", 00:12:42.306 "progress": { 00:12:42.306 "blocks": 20480, 00:12:42.306 "percent": 32 00:12:42.306 } 00:12:42.306 }, 00:12:42.306 "base_bdevs_list": [ 00:12:42.306 { 00:12:42.306 "name": "spare", 00:12:42.306 "uuid": "2048b15f-65e5-5ed9-82cb-74eb0c3b6794", 00:12:42.306 "is_configured": true, 00:12:42.306 "data_offset": 2048, 00:12:42.306 "data_size": 63488 00:12:42.306 }, 00:12:42.306 { 00:12:42.306 "name": "BaseBdev2", 00:12:42.306 "uuid": "cf618ba5-c8d9-541d-b5f6-e2729b53acf2", 00:12:42.306 "is_configured": true, 00:12:42.306 "data_offset": 2048, 
00:12:42.306 "data_size": 63488 00:12:42.306 } 00:12:42.306 ] 00:12:42.306 }' 00:12:42.306 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:42.306 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:42.306 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:42.588 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:42.588 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:42.588 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:42.588 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:42.588 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:42.588 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:42.588 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:42.588 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=390 00:12:42.588 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:42.588 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:42.588 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:42.588 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:42.588 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:42.588 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:42.588 08:49:18 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.588 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.588 08:49:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.588 08:49:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.588 08:49:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.588 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:42.588 "name": "raid_bdev1", 00:12:42.588 "uuid": "7ab7de59-dc97-4bd3-bcaa-bccaaf2548bf", 00:12:42.588 "strip_size_kb": 0, 00:12:42.588 "state": "online", 00:12:42.588 "raid_level": "raid1", 00:12:42.588 "superblock": true, 00:12:42.588 "num_base_bdevs": 2, 00:12:42.588 "num_base_bdevs_discovered": 2, 00:12:42.588 "num_base_bdevs_operational": 2, 00:12:42.588 "process": { 00:12:42.588 "type": "rebuild", 00:12:42.588 "target": "spare", 00:12:42.588 "progress": { 00:12:42.588 "blocks": 22528, 00:12:42.588 "percent": 35 00:12:42.588 } 00:12:42.588 }, 00:12:42.588 "base_bdevs_list": [ 00:12:42.588 { 00:12:42.588 "name": "spare", 00:12:42.588 "uuid": "2048b15f-65e5-5ed9-82cb-74eb0c3b6794", 00:12:42.588 "is_configured": true, 00:12:42.588 "data_offset": 2048, 00:12:42.588 "data_size": 63488 00:12:42.588 }, 00:12:42.588 { 00:12:42.588 "name": "BaseBdev2", 00:12:42.588 "uuid": "cf618ba5-c8d9-541d-b5f6-e2729b53acf2", 00:12:42.588 "is_configured": true, 00:12:42.588 "data_offset": 2048, 00:12:42.588 "data_size": 63488 00:12:42.588 } 00:12:42.588 ] 00:12:42.588 }' 00:12:42.588 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:42.588 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:42.588 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:12:42.588 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:42.588 08:49:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:43.541 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:43.541 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:43.541 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:43.541 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:43.541 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:43.541 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:43.541 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.541 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.541 08:49:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.541 08:49:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.541 08:49:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.541 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:43.541 "name": "raid_bdev1", 00:12:43.541 "uuid": "7ab7de59-dc97-4bd3-bcaa-bccaaf2548bf", 00:12:43.541 "strip_size_kb": 0, 00:12:43.541 "state": "online", 00:12:43.541 "raid_level": "raid1", 00:12:43.541 "superblock": true, 00:12:43.541 "num_base_bdevs": 2, 00:12:43.541 "num_base_bdevs_discovered": 2, 00:12:43.541 "num_base_bdevs_operational": 2, 00:12:43.541 "process": { 00:12:43.541 "type": "rebuild", 00:12:43.541 "target": "spare", 
00:12:43.541 "progress": { 00:12:43.541 "blocks": 45056, 00:12:43.541 "percent": 70 00:12:43.541 } 00:12:43.541 }, 00:12:43.541 "base_bdevs_list": [ 00:12:43.541 { 00:12:43.541 "name": "spare", 00:12:43.541 "uuid": "2048b15f-65e5-5ed9-82cb-74eb0c3b6794", 00:12:43.541 "is_configured": true, 00:12:43.542 "data_offset": 2048, 00:12:43.542 "data_size": 63488 00:12:43.542 }, 00:12:43.542 { 00:12:43.542 "name": "BaseBdev2", 00:12:43.542 "uuid": "cf618ba5-c8d9-541d-b5f6-e2729b53acf2", 00:12:43.542 "is_configured": true, 00:12:43.542 "data_offset": 2048, 00:12:43.542 "data_size": 63488 00:12:43.542 } 00:12:43.542 ] 00:12:43.542 }' 00:12:43.542 08:49:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:43.801 08:49:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:43.801 08:49:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:43.801 08:49:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:43.801 08:49:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:44.371 [2024-10-05 08:49:20.763938] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:44.371 [2024-10-05 08:49:20.764100] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:44.371 [2024-10-05 08:49:20.764218] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.631 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:44.631 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:44.631 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:44.631 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:12:44.631 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:44.631 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:44.631 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.631 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.631 08:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.631 08:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.631 08:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.892 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:44.892 "name": "raid_bdev1", 00:12:44.892 "uuid": "7ab7de59-dc97-4bd3-bcaa-bccaaf2548bf", 00:12:44.892 "strip_size_kb": 0, 00:12:44.892 "state": "online", 00:12:44.892 "raid_level": "raid1", 00:12:44.892 "superblock": true, 00:12:44.892 "num_base_bdevs": 2, 00:12:44.892 "num_base_bdevs_discovered": 2, 00:12:44.892 "num_base_bdevs_operational": 2, 00:12:44.892 "base_bdevs_list": [ 00:12:44.892 { 00:12:44.892 "name": "spare", 00:12:44.892 "uuid": "2048b15f-65e5-5ed9-82cb-74eb0c3b6794", 00:12:44.892 "is_configured": true, 00:12:44.892 "data_offset": 2048, 00:12:44.892 "data_size": 63488 00:12:44.892 }, 00:12:44.892 { 00:12:44.892 "name": "BaseBdev2", 00:12:44.892 "uuid": "cf618ba5-c8d9-541d-b5f6-e2729b53acf2", 00:12:44.892 "is_configured": true, 00:12:44.892 "data_offset": 2048, 00:12:44.892 "data_size": 63488 00:12:44.892 } 00:12:44.892 ] 00:12:44.892 }' 00:12:44.892 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:44.892 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:44.892 
08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:44.892 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:44.892 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:44.892 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:44.892 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:44.892 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:44.892 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:44.892 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:44.892 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.892 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.892 08:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.892 08:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.892 08:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.892 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:44.892 "name": "raid_bdev1", 00:12:44.892 "uuid": "7ab7de59-dc97-4bd3-bcaa-bccaaf2548bf", 00:12:44.892 "strip_size_kb": 0, 00:12:44.892 "state": "online", 00:12:44.892 "raid_level": "raid1", 00:12:44.892 "superblock": true, 00:12:44.892 "num_base_bdevs": 2, 00:12:44.892 "num_base_bdevs_discovered": 2, 00:12:44.892 "num_base_bdevs_operational": 2, 00:12:44.892 "base_bdevs_list": [ 00:12:44.892 { 00:12:44.892 "name": "spare", 00:12:44.892 "uuid": 
"2048b15f-65e5-5ed9-82cb-74eb0c3b6794", 00:12:44.892 "is_configured": true, 00:12:44.892 "data_offset": 2048, 00:12:44.892 "data_size": 63488 00:12:44.892 }, 00:12:44.892 { 00:12:44.892 "name": "BaseBdev2", 00:12:44.892 "uuid": "cf618ba5-c8d9-541d-b5f6-e2729b53acf2", 00:12:44.892 "is_configured": true, 00:12:44.892 "data_offset": 2048, 00:12:44.892 "data_size": 63488 00:12:44.892 } 00:12:44.892 ] 00:12:44.892 }' 00:12:44.892 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:44.892 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:44.892 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:44.892 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:44.892 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:44.892 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:44.892 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:44.892 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:44.892 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:44.892 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:44.892 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.892 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.892 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.892 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.892 08:49:21 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.892 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.892 08:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.892 08:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.892 08:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.152 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.152 "name": "raid_bdev1", 00:12:45.152 "uuid": "7ab7de59-dc97-4bd3-bcaa-bccaaf2548bf", 00:12:45.152 "strip_size_kb": 0, 00:12:45.152 "state": "online", 00:12:45.152 "raid_level": "raid1", 00:12:45.152 "superblock": true, 00:12:45.152 "num_base_bdevs": 2, 00:12:45.152 "num_base_bdevs_discovered": 2, 00:12:45.152 "num_base_bdevs_operational": 2, 00:12:45.152 "base_bdevs_list": [ 00:12:45.152 { 00:12:45.152 "name": "spare", 00:12:45.152 "uuid": "2048b15f-65e5-5ed9-82cb-74eb0c3b6794", 00:12:45.152 "is_configured": true, 00:12:45.152 "data_offset": 2048, 00:12:45.152 "data_size": 63488 00:12:45.152 }, 00:12:45.152 { 00:12:45.152 "name": "BaseBdev2", 00:12:45.152 "uuid": "cf618ba5-c8d9-541d-b5f6-e2729b53acf2", 00:12:45.152 "is_configured": true, 00:12:45.152 "data_offset": 2048, 00:12:45.152 "data_size": 63488 00:12:45.152 } 00:12:45.152 ] 00:12:45.152 }' 00:12:45.152 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.152 08:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.413 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:45.413 08:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.413 08:49:21 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:45.413 [2024-10-05 08:49:21.757091] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:45.413 [2024-10-05 08:49:21.757170] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:45.413 [2024-10-05 08:49:21.757265] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:45.413 [2024-10-05 08:49:21.757343] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:45.413 [2024-10-05 08:49:21.757385] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:45.413 08:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.413 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.413 08:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.413 08:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.413 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:45.413 08:49:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.413 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:45.413 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:45.413 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:45.413 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:45.413 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:45.413 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:12:45.413 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:45.413 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:45.413 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:45.413 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:45.413 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:45.413 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:45.413 08:49:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:45.673 /dev/nbd0 00:12:45.673 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:45.673 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:45.673 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:45.673 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:45.673 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:45.673 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:45.673 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:45.673 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:45.673 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:45.673 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:45.673 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:45.673 1+0 records in 00:12:45.673 1+0 records out 00:12:45.673 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00037725 s, 10.9 MB/s 00:12:45.673 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.673 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:45.673 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.673 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:45.673 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:45.673 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:45.673 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:45.673 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:45.932 /dev/nbd1 00:12:45.932 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:45.932 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:45.932 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:45.932 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:45.932 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:45.932 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:45.933 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:45.933 08:49:22 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:45.933 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:45.933 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:45.933 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:45.933 1+0 records in 00:12:45.933 1+0 records out 00:12:45.933 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365486 s, 11.2 MB/s 00:12:45.933 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.933 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:45.933 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.933 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:45.933 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:45.933 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:45.933 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:45.933 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:46.192 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:46.192 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:46.192 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:46.192 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:46.192 
08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:46.192 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:46.192 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:46.452 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:46.452 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:46.452 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:46.452 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:46.452 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:46.452 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:46.452 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:46.452 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:46.452 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:46.452 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:46.452 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:46.452 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:46.452 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:46.452 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:46.452 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:46.452 08:49:22 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:46.452 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:46.452 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:46.452 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:46.452 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:46.452 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.452 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.452 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.452 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:46.452 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.452 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.711 [2024-10-05 08:49:22.927432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:46.711 [2024-10-05 08:49:22.927481] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:46.711 [2024-10-05 08:49:22.927518] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:46.711 [2024-10-05 08:49:22.927528] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:46.711 [2024-10-05 08:49:22.929679] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:46.711 [2024-10-05 08:49:22.929717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:46.711 [2024-10-05 08:49:22.929801] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:46.711 [2024-10-05 
08:49:22.929847] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:46.711 [2024-10-05 08:49:22.929999] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:46.711 spare 00:12:46.711 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.711 08:49:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:46.711 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.711 08:49:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.711 [2024-10-05 08:49:23.029911] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:46.711 [2024-10-05 08:49:23.029941] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:46.711 [2024-10-05 08:49:23.030227] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:12:46.711 [2024-10-05 08:49:23.030388] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:46.711 [2024-10-05 08:49:23.030398] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:12:46.711 [2024-10-05 08:49:23.030543] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:46.711 08:49:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.711 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:46.711 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:46.711 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:46.711 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:46.712 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:46.712 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:46.712 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.712 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.712 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.712 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.712 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.712 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.712 08:49:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.712 08:49:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.712 08:49:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.712 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.712 "name": "raid_bdev1", 00:12:46.712 "uuid": "7ab7de59-dc97-4bd3-bcaa-bccaaf2548bf", 00:12:46.712 "strip_size_kb": 0, 00:12:46.712 "state": "online", 00:12:46.712 "raid_level": "raid1", 00:12:46.712 "superblock": true, 00:12:46.712 "num_base_bdevs": 2, 00:12:46.712 "num_base_bdevs_discovered": 2, 00:12:46.712 "num_base_bdevs_operational": 2, 00:12:46.712 "base_bdevs_list": [ 00:12:46.712 { 00:12:46.712 "name": "spare", 00:12:46.712 "uuid": "2048b15f-65e5-5ed9-82cb-74eb0c3b6794", 00:12:46.712 "is_configured": true, 00:12:46.712 "data_offset": 2048, 00:12:46.712 "data_size": 63488 00:12:46.712 }, 00:12:46.712 { 00:12:46.712 "name": "BaseBdev2", 00:12:46.712 "uuid": 
"cf618ba5-c8d9-541d-b5f6-e2729b53acf2", 00:12:46.712 "is_configured": true, 00:12:46.712 "data_offset": 2048, 00:12:46.712 "data_size": 63488 00:12:46.712 } 00:12:46.712 ] 00:12:46.712 }' 00:12:46.712 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.712 08:49:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.279 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:47.279 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:47.279 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:47.279 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:47.279 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:47.280 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.280 08:49:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.280 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.280 08:49:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.280 08:49:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.280 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:47.280 "name": "raid_bdev1", 00:12:47.280 "uuid": "7ab7de59-dc97-4bd3-bcaa-bccaaf2548bf", 00:12:47.280 "strip_size_kb": 0, 00:12:47.280 "state": "online", 00:12:47.280 "raid_level": "raid1", 00:12:47.280 "superblock": true, 00:12:47.280 "num_base_bdevs": 2, 00:12:47.280 "num_base_bdevs_discovered": 2, 00:12:47.280 "num_base_bdevs_operational": 2, 00:12:47.280 "base_bdevs_list": [ 00:12:47.280 { 
00:12:47.280 "name": "spare", 00:12:47.280 "uuid": "2048b15f-65e5-5ed9-82cb-74eb0c3b6794", 00:12:47.280 "is_configured": true, 00:12:47.280 "data_offset": 2048, 00:12:47.280 "data_size": 63488 00:12:47.280 }, 00:12:47.280 { 00:12:47.280 "name": "BaseBdev2", 00:12:47.280 "uuid": "cf618ba5-c8d9-541d-b5f6-e2729b53acf2", 00:12:47.280 "is_configured": true, 00:12:47.280 "data_offset": 2048, 00:12:47.280 "data_size": 63488 00:12:47.280 } 00:12:47.280 ] 00:12:47.280 }' 00:12:47.280 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:47.280 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:47.280 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.280 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:47.280 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.280 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:47.280 08:49:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.280 08:49:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.280 08:49:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.280 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:47.280 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:47.280 08:49:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.280 08:49:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.280 [2024-10-05 08:49:23.658201] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:12:47.280 08:49:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.280 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:47.280 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:47.280 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:47.280 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:47.280 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:47.280 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:47.280 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.280 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.280 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.280 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.280 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.280 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.280 08:49:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.280 08:49:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.280 08:49:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.280 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.280 "name": "raid_bdev1", 00:12:47.280 "uuid": "7ab7de59-dc97-4bd3-bcaa-bccaaf2548bf", 00:12:47.280 "strip_size_kb": 0, 00:12:47.280 
"state": "online", 00:12:47.280 "raid_level": "raid1", 00:12:47.280 "superblock": true, 00:12:47.280 "num_base_bdevs": 2, 00:12:47.280 "num_base_bdevs_discovered": 1, 00:12:47.280 "num_base_bdevs_operational": 1, 00:12:47.280 "base_bdevs_list": [ 00:12:47.280 { 00:12:47.280 "name": null, 00:12:47.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.280 "is_configured": false, 00:12:47.280 "data_offset": 0, 00:12:47.280 "data_size": 63488 00:12:47.280 }, 00:12:47.280 { 00:12:47.280 "name": "BaseBdev2", 00:12:47.280 "uuid": "cf618ba5-c8d9-541d-b5f6-e2729b53acf2", 00:12:47.280 "is_configured": true, 00:12:47.280 "data_offset": 2048, 00:12:47.280 "data_size": 63488 00:12:47.280 } 00:12:47.280 ] 00:12:47.280 }' 00:12:47.280 08:49:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.280 08:49:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.850 08:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:47.850 08:49:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.850 08:49:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.850 [2024-10-05 08:49:24.117474] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:47.850 [2024-10-05 08:49:24.117631] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:47.850 [2024-10-05 08:49:24.117648] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:47.850 [2024-10-05 08:49:24.117681] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:47.850 [2024-10-05 08:49:24.132237] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:12:47.850 08:49:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.850 08:49:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:47.850 [2024-10-05 08:49:24.134053] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:48.788 08:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:48.788 08:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.789 08:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:48.789 08:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:48.789 08:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:48.789 08:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.789 08:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.789 08:49:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.789 08:49:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.789 08:49:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.789 08:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:48.789 "name": "raid_bdev1", 00:12:48.789 "uuid": "7ab7de59-dc97-4bd3-bcaa-bccaaf2548bf", 00:12:48.789 "strip_size_kb": 0, 00:12:48.789 "state": "online", 00:12:48.789 "raid_level": "raid1", 
00:12:48.789 "superblock": true, 00:12:48.789 "num_base_bdevs": 2, 00:12:48.789 "num_base_bdevs_discovered": 2, 00:12:48.789 "num_base_bdevs_operational": 2, 00:12:48.789 "process": { 00:12:48.789 "type": "rebuild", 00:12:48.789 "target": "spare", 00:12:48.789 "progress": { 00:12:48.789 "blocks": 20480, 00:12:48.789 "percent": 32 00:12:48.789 } 00:12:48.789 }, 00:12:48.789 "base_bdevs_list": [ 00:12:48.789 { 00:12:48.789 "name": "spare", 00:12:48.789 "uuid": "2048b15f-65e5-5ed9-82cb-74eb0c3b6794", 00:12:48.789 "is_configured": true, 00:12:48.789 "data_offset": 2048, 00:12:48.789 "data_size": 63488 00:12:48.789 }, 00:12:48.789 { 00:12:48.789 "name": "BaseBdev2", 00:12:48.789 "uuid": "cf618ba5-c8d9-541d-b5f6-e2729b53acf2", 00:12:48.789 "is_configured": true, 00:12:48.789 "data_offset": 2048, 00:12:48.789 "data_size": 63488 00:12:48.789 } 00:12:48.789 ] 00:12:48.789 }' 00:12:48.789 08:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.789 08:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:48.789 08:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:49.049 08:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:49.049 08:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:49.049 08:49:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.049 08:49:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.049 [2024-10-05 08:49:25.273485] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:49.049 [2024-10-05 08:49:25.338832] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:49.049 [2024-10-05 08:49:25.338934] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:12:49.049 [2024-10-05 08:49:25.338950] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:49.049 [2024-10-05 08:49:25.338974] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:49.049 08:49:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.049 08:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:49.049 08:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:49.049 08:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:49.049 08:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:49.049 08:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:49.049 08:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:49.049 08:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.049 08:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.049 08:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.049 08:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.049 08:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.049 08:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.049 08:49:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.049 08:49:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.049 08:49:25 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.049 08:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.049 "name": "raid_bdev1", 00:12:49.049 "uuid": "7ab7de59-dc97-4bd3-bcaa-bccaaf2548bf", 00:12:49.049 "strip_size_kb": 0, 00:12:49.049 "state": "online", 00:12:49.049 "raid_level": "raid1", 00:12:49.049 "superblock": true, 00:12:49.049 "num_base_bdevs": 2, 00:12:49.049 "num_base_bdevs_discovered": 1, 00:12:49.049 "num_base_bdevs_operational": 1, 00:12:49.049 "base_bdevs_list": [ 00:12:49.049 { 00:12:49.049 "name": null, 00:12:49.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.049 "is_configured": false, 00:12:49.049 "data_offset": 0, 00:12:49.049 "data_size": 63488 00:12:49.049 }, 00:12:49.049 { 00:12:49.049 "name": "BaseBdev2", 00:12:49.049 "uuid": "cf618ba5-c8d9-541d-b5f6-e2729b53acf2", 00:12:49.049 "is_configured": true, 00:12:49.049 "data_offset": 2048, 00:12:49.049 "data_size": 63488 00:12:49.049 } 00:12:49.049 ] 00:12:49.049 }' 00:12:49.049 08:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.049 08:49:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.309 08:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:49.309 08:49:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.309 08:49:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.309 [2024-10-05 08:49:25.741444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:49.309 [2024-10-05 08:49:25.741543] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:49.309 [2024-10-05 08:49:25.741580] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:49.309 [2024-10-05 08:49:25.741616] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:49.309 [2024-10-05 08:49:25.742116] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:49.309 [2024-10-05 08:49:25.742187] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:49.309 [2024-10-05 08:49:25.742294] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:49.309 [2024-10-05 08:49:25.742335] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:49.309 [2024-10-05 08:49:25.742377] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:49.309 [2024-10-05 08:49:25.742457] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:49.309 [2024-10-05 08:49:25.757129] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:12:49.309 spare 00:12:49.309 08:49:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.309 [2024-10-05 08:49:25.758825] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:49.309 08:49:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:50.690 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:50.690 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.690 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:50.690 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:50.690 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.690 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:50.690 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.690 08:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.690 08:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.690 08:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.690 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.690 "name": "raid_bdev1", 00:12:50.690 "uuid": "7ab7de59-dc97-4bd3-bcaa-bccaaf2548bf", 00:12:50.690 "strip_size_kb": 0, 00:12:50.690 "state": "online", 00:12:50.690 "raid_level": "raid1", 00:12:50.690 "superblock": true, 00:12:50.690 "num_base_bdevs": 2, 00:12:50.690 "num_base_bdevs_discovered": 2, 00:12:50.690 "num_base_bdevs_operational": 2, 00:12:50.690 "process": { 00:12:50.690 "type": "rebuild", 00:12:50.690 "target": "spare", 00:12:50.690 "progress": { 00:12:50.690 "blocks": 20480, 00:12:50.690 "percent": 32 00:12:50.690 } 00:12:50.690 }, 00:12:50.690 "base_bdevs_list": [ 00:12:50.690 { 00:12:50.690 "name": "spare", 00:12:50.690 "uuid": "2048b15f-65e5-5ed9-82cb-74eb0c3b6794", 00:12:50.690 "is_configured": true, 00:12:50.690 "data_offset": 2048, 00:12:50.690 "data_size": 63488 00:12:50.690 }, 00:12:50.690 { 00:12:50.690 "name": "BaseBdev2", 00:12:50.690 "uuid": "cf618ba5-c8d9-541d-b5f6-e2729b53acf2", 00:12:50.690 "is_configured": true, 00:12:50.690 "data_offset": 2048, 00:12:50.690 "data_size": 63488 00:12:50.690 } 00:12:50.690 ] 00:12:50.690 }' 00:12:50.690 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.690 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:50.690 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.690 
08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:50.690 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:50.690 08:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.690 08:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.690 [2024-10-05 08:49:26.922593] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:50.690 [2024-10-05 08:49:26.963470] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:50.690 [2024-10-05 08:49:26.963519] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.690 [2024-10-05 08:49:26.963535] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:50.690 [2024-10-05 08:49:26.963542] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:50.690 08:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.690 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:50.690 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:50.690 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.690 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.690 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.690 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:50.690 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.690 08:49:26 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.690 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.690 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.690 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.690 08:49:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.690 08:49:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.690 08:49:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.690 08:49:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.690 08:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.690 "name": "raid_bdev1", 00:12:50.690 "uuid": "7ab7de59-dc97-4bd3-bcaa-bccaaf2548bf", 00:12:50.690 "strip_size_kb": 0, 00:12:50.690 "state": "online", 00:12:50.690 "raid_level": "raid1", 00:12:50.690 "superblock": true, 00:12:50.690 "num_base_bdevs": 2, 00:12:50.690 "num_base_bdevs_discovered": 1, 00:12:50.690 "num_base_bdevs_operational": 1, 00:12:50.690 "base_bdevs_list": [ 00:12:50.690 { 00:12:50.690 "name": null, 00:12:50.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.690 "is_configured": false, 00:12:50.690 "data_offset": 0, 00:12:50.690 "data_size": 63488 00:12:50.690 }, 00:12:50.690 { 00:12:50.690 "name": "BaseBdev2", 00:12:50.690 "uuid": "cf618ba5-c8d9-541d-b5f6-e2729b53acf2", 00:12:50.690 "is_configured": true, 00:12:50.690 "data_offset": 2048, 00:12:50.690 "data_size": 63488 00:12:50.690 } 00:12:50.690 ] 00:12:50.690 }' 00:12:50.690 08:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.690 08:49:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.259 08:49:27 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:51.259 08:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:51.259 08:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:51.259 08:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:51.259 08:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:51.259 08:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.259 08:49:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.259 08:49:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.259 08:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.259 08:49:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.259 08:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:51.259 "name": "raid_bdev1", 00:12:51.259 "uuid": "7ab7de59-dc97-4bd3-bcaa-bccaaf2548bf", 00:12:51.259 "strip_size_kb": 0, 00:12:51.259 "state": "online", 00:12:51.259 "raid_level": "raid1", 00:12:51.259 "superblock": true, 00:12:51.259 "num_base_bdevs": 2, 00:12:51.259 "num_base_bdevs_discovered": 1, 00:12:51.259 "num_base_bdevs_operational": 1, 00:12:51.259 "base_bdevs_list": [ 00:12:51.259 { 00:12:51.259 "name": null, 00:12:51.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.259 "is_configured": false, 00:12:51.259 "data_offset": 0, 00:12:51.259 "data_size": 63488 00:12:51.259 }, 00:12:51.259 { 00:12:51.259 "name": "BaseBdev2", 00:12:51.259 "uuid": "cf618ba5-c8d9-541d-b5f6-e2729b53acf2", 00:12:51.259 "is_configured": true, 00:12:51.259 "data_offset": 2048, 00:12:51.259 "data_size": 
63488 00:12:51.259 } 00:12:51.259 ] 00:12:51.259 }' 00:12:51.259 08:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:51.259 08:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:51.259 08:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:51.259 08:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:51.260 08:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:51.260 08:49:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.260 08:49:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.260 08:49:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.260 08:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:51.260 08:49:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.260 08:49:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.260 [2024-10-05 08:49:27.618215] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:51.260 [2024-10-05 08:49:27.618305] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:51.260 [2024-10-05 08:49:27.618349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:51.260 [2024-10-05 08:49:27.618359] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.260 [2024-10-05 08:49:27.618787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.260 [2024-10-05 08:49:27.618803] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:12:51.260 [2024-10-05 08:49:27.618877] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:51.260 [2024-10-05 08:49:27.618891] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:51.260 [2024-10-05 08:49:27.618903] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:51.260 [2024-10-05 08:49:27.618912] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:51.260 BaseBdev1 00:12:51.260 08:49:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.260 08:49:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:52.199 08:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:52.199 08:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:52.199 08:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:52.199 08:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.199 08:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.199 08:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:52.199 08:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.199 08:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.199 08:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.199 08:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.199 08:49:28 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.199 08:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.199 08:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.199 08:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.199 08:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.458 08:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.458 "name": "raid_bdev1", 00:12:52.458 "uuid": "7ab7de59-dc97-4bd3-bcaa-bccaaf2548bf", 00:12:52.458 "strip_size_kb": 0, 00:12:52.458 "state": "online", 00:12:52.458 "raid_level": "raid1", 00:12:52.458 "superblock": true, 00:12:52.458 "num_base_bdevs": 2, 00:12:52.458 "num_base_bdevs_discovered": 1, 00:12:52.458 "num_base_bdevs_operational": 1, 00:12:52.458 "base_bdevs_list": [ 00:12:52.458 { 00:12:52.458 "name": null, 00:12:52.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.458 "is_configured": false, 00:12:52.458 "data_offset": 0, 00:12:52.458 "data_size": 63488 00:12:52.458 }, 00:12:52.458 { 00:12:52.458 "name": "BaseBdev2", 00:12:52.458 "uuid": "cf618ba5-c8d9-541d-b5f6-e2729b53acf2", 00:12:52.458 "is_configured": true, 00:12:52.458 "data_offset": 2048, 00:12:52.459 "data_size": 63488 00:12:52.459 } 00:12:52.459 ] 00:12:52.459 }' 00:12:52.459 08:49:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.459 08:49:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.719 08:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:52.719 08:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.719 08:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:12:52.719 08:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:52.719 08:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.719 08:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.719 08:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.719 08:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.719 08:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.719 08:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.719 08:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.719 "name": "raid_bdev1", 00:12:52.719 "uuid": "7ab7de59-dc97-4bd3-bcaa-bccaaf2548bf", 00:12:52.719 "strip_size_kb": 0, 00:12:52.719 "state": "online", 00:12:52.719 "raid_level": "raid1", 00:12:52.719 "superblock": true, 00:12:52.719 "num_base_bdevs": 2, 00:12:52.719 "num_base_bdevs_discovered": 1, 00:12:52.719 "num_base_bdevs_operational": 1, 00:12:52.719 "base_bdevs_list": [ 00:12:52.719 { 00:12:52.719 "name": null, 00:12:52.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.719 "is_configured": false, 00:12:52.719 "data_offset": 0, 00:12:52.719 "data_size": 63488 00:12:52.719 }, 00:12:52.719 { 00:12:52.719 "name": "BaseBdev2", 00:12:52.719 "uuid": "cf618ba5-c8d9-541d-b5f6-e2729b53acf2", 00:12:52.719 "is_configured": true, 00:12:52.719 "data_offset": 2048, 00:12:52.719 "data_size": 63488 00:12:52.719 } 00:12:52.719 ] 00:12:52.719 }' 00:12:52.719 08:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.719 08:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:52.719 08:49:29 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.979 08:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:52.979 08:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:52.979 08:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:12:52.979 08:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:52.979 08:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:52.979 08:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:52.979 08:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:52.979 08:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:52.979 08:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:52.979 08:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.979 08:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.979 [2024-10-05 08:49:29.243664] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:52.979 [2024-10-05 08:49:29.243870] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:52.979 [2024-10-05 08:49:29.243935] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:52.979 request: 00:12:52.979 { 00:12:52.979 "base_bdev": "BaseBdev1", 00:12:52.979 "raid_bdev": "raid_bdev1", 00:12:52.979 "method": 
"bdev_raid_add_base_bdev", 00:12:52.979 "req_id": 1 00:12:52.979 } 00:12:52.979 Got JSON-RPC error response 00:12:52.979 response: 00:12:52.979 { 00:12:52.979 "code": -22, 00:12:52.979 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:52.979 } 00:12:52.980 08:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:52.980 08:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:12:52.980 08:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:52.980 08:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:52.980 08:49:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:52.980 08:49:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:53.919 08:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:53.919 08:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:53.919 08:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:53.919 08:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:53.919 08:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:53.919 08:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:53.919 08:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.919 08:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.919 08:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.919 08:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.919 08:49:30 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.919 08:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.919 08:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.919 08:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.919 08:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.919 08:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.919 "name": "raid_bdev1", 00:12:53.919 "uuid": "7ab7de59-dc97-4bd3-bcaa-bccaaf2548bf", 00:12:53.919 "strip_size_kb": 0, 00:12:53.919 "state": "online", 00:12:53.919 "raid_level": "raid1", 00:12:53.919 "superblock": true, 00:12:53.919 "num_base_bdevs": 2, 00:12:53.919 "num_base_bdevs_discovered": 1, 00:12:53.919 "num_base_bdevs_operational": 1, 00:12:53.919 "base_bdevs_list": [ 00:12:53.919 { 00:12:53.919 "name": null, 00:12:53.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.919 "is_configured": false, 00:12:53.919 "data_offset": 0, 00:12:53.919 "data_size": 63488 00:12:53.919 }, 00:12:53.919 { 00:12:53.919 "name": "BaseBdev2", 00:12:53.919 "uuid": "cf618ba5-c8d9-541d-b5f6-e2729b53acf2", 00:12:53.919 "is_configured": true, 00:12:53.919 "data_offset": 2048, 00:12:53.919 "data_size": 63488 00:12:53.919 } 00:12:53.919 ] 00:12:53.919 }' 00:12:53.919 08:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.919 08:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.487 08:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:54.487 08:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:54.487 08:49:30 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:54.487 08:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:54.487 08:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:54.487 08:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.487 08:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.487 08:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.487 08:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.487 08:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.487 08:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:54.487 "name": "raid_bdev1", 00:12:54.487 "uuid": "7ab7de59-dc97-4bd3-bcaa-bccaaf2548bf", 00:12:54.487 "strip_size_kb": 0, 00:12:54.487 "state": "online", 00:12:54.487 "raid_level": "raid1", 00:12:54.487 "superblock": true, 00:12:54.487 "num_base_bdevs": 2, 00:12:54.487 "num_base_bdevs_discovered": 1, 00:12:54.487 "num_base_bdevs_operational": 1, 00:12:54.487 "base_bdevs_list": [ 00:12:54.487 { 00:12:54.487 "name": null, 00:12:54.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.487 "is_configured": false, 00:12:54.487 "data_offset": 0, 00:12:54.487 "data_size": 63488 00:12:54.487 }, 00:12:54.487 { 00:12:54.487 "name": "BaseBdev2", 00:12:54.487 "uuid": "cf618ba5-c8d9-541d-b5f6-e2729b53acf2", 00:12:54.487 "is_configured": true, 00:12:54.487 "data_offset": 2048, 00:12:54.487 "data_size": 63488 00:12:54.487 } 00:12:54.487 ] 00:12:54.487 }' 00:12:54.487 08:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:54.487 08:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:12:54.487 08:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.487 08:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:54.487 08:49:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 73790 00:12:54.487 08:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 73790 ']' 00:12:54.487 08:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 73790 00:12:54.487 08:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:12:54.487 08:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:54.487 08:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73790 00:12:54.487 08:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:54.487 08:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:54.487 killing process with pid 73790 00:12:54.487 Received shutdown signal, test time was about 60.000000 seconds 00:12:54.487 00:12:54.487 Latency(us) 00:12:54.488 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:54.488 =================================================================================================================== 00:12:54.488 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:54.488 08:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73790' 00:12:54.488 08:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 73790 00:12:54.488 [2024-10-05 08:49:30.876192] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:54.488 [2024-10-05 08:49:30.876319] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:12:54.488 08:49:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 73790 00:12:54.488 [2024-10-05 08:49:30.876367] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:54.488 [2024-10-05 08:49:30.876378] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:12:54.748 [2024-10-05 08:49:31.156206] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:56.161 08:49:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:12:56.162 00:12:56.162 real 0m23.100s 00:12:56.162 user 0m27.795s 00:12:56.162 sys 0m3.919s 00:12:56.162 08:49:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:56.162 08:49:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.162 ************************************ 00:12:56.162 END TEST raid_rebuild_test_sb 00:12:56.162 ************************************ 00:12:56.162 08:49:32 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:12:56.162 08:49:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:56.162 08:49:32 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:56.162 08:49:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:56.162 ************************************ 00:12:56.162 START TEST raid_rebuild_test_io 00:12:56.162 ************************************ 00:12:56.162 08:49:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:12:56.162 08:49:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:56.162 08:49:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:56.162 08:49:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 
-- # local superblock=false 00:12:56.162 08:49:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:56.162 08:49:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:56.162 08:49:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:56.162 08:49:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:56.162 08:49:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:56.162 08:49:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:56.162 08:49:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:56.162 08:49:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:56.162 08:49:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:56.162 08:49:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:56.162 08:49:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:56.162 08:49:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:56.162 08:49:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:56.162 08:49:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:56.162 08:49:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:56.162 08:49:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:56.162 08:49:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:56.162 08:49:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:56.162 08:49:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:56.162 
08:49:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:56.162 08:49:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=74382 00:12:56.162 08:49:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 74382 00:12:56.162 08:49:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:56.162 08:49:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 74382 ']' 00:12:56.162 08:49:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.162 08:49:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:56.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.162 08:49:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.162 08:49:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:56.162 08:49:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.162 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:56.162 Zero copy mechanism will not be used. 00:12:56.162 [2024-10-05 08:49:32.537405] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:12:56.162 [2024-10-05 08:49:32.537530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74382 ] 00:12:56.422 [2024-10-05 08:49:32.699549] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.422 [2024-10-05 08:49:32.885386] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.682 [2024-10-05 08:49:33.063002] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:56.682 [2024-10-05 08:49:33.063053] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:56.942 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:56.942 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:12:56.942 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:56.942 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:56.942 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.942 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.942 BaseBdev1_malloc 00:12:56.942 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.942 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:56.942 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.942 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.942 [2024-10-05 08:49:33.407277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:12:56.942 [2024-10-05 08:49:33.407351] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.942 [2024-10-05 08:49:33.407380] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:56.942 [2024-10-05 08:49:33.407396] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.942 [2024-10-05 08:49:33.409530] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.942 [2024-10-05 08:49:33.409605] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:56.942 BaseBdev1 00:12:56.942 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.942 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:57.202 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:57.202 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.202 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.202 BaseBdev2_malloc 00:12:57.202 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.202 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:57.202 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.202 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.202 [2024-10-05 08:49:33.492187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:57.202 [2024-10-05 08:49:33.492245] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.202 [2024-10-05 08:49:33.492263] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:57.202 [2024-10-05 08:49:33.492275] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.202 [2024-10-05 08:49:33.494193] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.202 [2024-10-05 08:49:33.494232] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:57.202 BaseBdev2 00:12:57.202 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.202 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:57.202 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.202 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.202 spare_malloc 00:12:57.202 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.202 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:57.202 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.202 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.202 spare_delay 00:12:57.202 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.202 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:57.202 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.202 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.202 [2024-10-05 08:49:33.558478] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:12:57.202 [2024-10-05 08:49:33.558531] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.202 [2024-10-05 08:49:33.558550] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:57.202 [2024-10-05 08:49:33.558560] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.202 [2024-10-05 08:49:33.560494] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.202 [2024-10-05 08:49:33.560527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:57.202 spare 00:12:57.202 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.202 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:57.202 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.202 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.202 [2024-10-05 08:49:33.570503] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:57.202 [2024-10-05 08:49:33.572188] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:57.202 [2024-10-05 08:49:33.572276] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:57.202 [2024-10-05 08:49:33.572287] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:57.202 [2024-10-05 08:49:33.572551] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:57.202 [2024-10-05 08:49:33.572705] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:57.202 [2024-10-05 08:49:33.572723] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:12:57.202 [2024-10-05 08:49:33.572877] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:57.202 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.202 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:57.202 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:57.202 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:57.202 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:57.202 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:57.202 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:57.202 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.202 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.202 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.202 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.202 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.203 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.203 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.203 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.203 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.203 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.203 
"name": "raid_bdev1", 00:12:57.203 "uuid": "40c6c2e1-638c-479f-982b-fe7a9c189012", 00:12:57.203 "strip_size_kb": 0, 00:12:57.203 "state": "online", 00:12:57.203 "raid_level": "raid1", 00:12:57.203 "superblock": false, 00:12:57.203 "num_base_bdevs": 2, 00:12:57.203 "num_base_bdevs_discovered": 2, 00:12:57.203 "num_base_bdevs_operational": 2, 00:12:57.203 "base_bdevs_list": [ 00:12:57.203 { 00:12:57.203 "name": "BaseBdev1", 00:12:57.203 "uuid": "bd56226a-1731-5677-b7db-fae19ce1908d", 00:12:57.203 "is_configured": true, 00:12:57.203 "data_offset": 0, 00:12:57.203 "data_size": 65536 00:12:57.203 }, 00:12:57.203 { 00:12:57.203 "name": "BaseBdev2", 00:12:57.203 "uuid": "43a8c263-ad3b-5ff0-9db8-afa402774534", 00:12:57.203 "is_configured": true, 00:12:57.203 "data_offset": 0, 00:12:57.203 "data_size": 65536 00:12:57.203 } 00:12:57.203 ] 00:12:57.203 }' 00:12:57.203 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.203 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.772 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:57.772 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.772 08:49:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.772 08:49:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:57.772 [2024-10-05 08:49:33.990029] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:57.772 08:49:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.772 08:49:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:57.772 08:49:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.772 08:49:34 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.772 08:49:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.772 08:49:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:57.772 08:49:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.772 08:49:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:57.772 08:49:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:57.772 08:49:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:57.772 08:49:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:57.772 08:49:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.772 08:49:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.772 [2024-10-05 08:49:34.077596] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:57.772 08:49:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.772 08:49:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:57.772 08:49:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:57.772 08:49:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:57.772 08:49:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:57.772 08:49:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:57.772 08:49:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:57.772 08:49:34 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.772 08:49:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.772 08:49:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.772 08:49:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.772 08:49:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.772 08:49:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.772 08:49:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.772 08:49:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.772 08:49:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.772 08:49:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.772 "name": "raid_bdev1", 00:12:57.772 "uuid": "40c6c2e1-638c-479f-982b-fe7a9c189012", 00:12:57.772 "strip_size_kb": 0, 00:12:57.772 "state": "online", 00:12:57.772 "raid_level": "raid1", 00:12:57.772 "superblock": false, 00:12:57.772 "num_base_bdevs": 2, 00:12:57.772 "num_base_bdevs_discovered": 1, 00:12:57.772 "num_base_bdevs_operational": 1, 00:12:57.772 "base_bdevs_list": [ 00:12:57.772 { 00:12:57.772 "name": null, 00:12:57.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.772 "is_configured": false, 00:12:57.772 "data_offset": 0, 00:12:57.772 "data_size": 65536 00:12:57.772 }, 00:12:57.772 { 00:12:57.772 "name": "BaseBdev2", 00:12:57.772 "uuid": "43a8c263-ad3b-5ff0-9db8-afa402774534", 00:12:57.772 "is_configured": true, 00:12:57.772 "data_offset": 0, 00:12:57.772 "data_size": 65536 00:12:57.772 } 00:12:57.772 ] 00:12:57.772 }' 00:12:57.772 08:49:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:12:57.772 08:49:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.772 [2024-10-05 08:49:34.157806] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:57.772 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:57.772 Zero copy mechanism will not be used. 00:12:57.772 Running I/O for 60 seconds... 00:12:58.341 08:49:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:58.341 08:49:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.341 08:49:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.341 [2024-10-05 08:49:34.547386] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:58.341 08:49:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.341 08:49:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:58.341 [2024-10-05 08:49:34.588026] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:58.341 [2024-10-05 08:49:34.589814] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:58.341 [2024-10-05 08:49:34.708048] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:58.341 [2024-10-05 08:49:34.708450] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:58.601 [2024-10-05 08:49:34.922562] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:58.601 [2024-10-05 08:49:34.922880] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:59.121 213.00 IOPS, 639.00 MiB/s 
[2024-10-05 08:49:35.383825] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:59.121 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:59.121 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:59.121 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:59.121 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:59.121 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:59.121 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.380 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.380 08:49:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.380 08:49:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.380 08:49:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.380 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:59.380 "name": "raid_bdev1", 00:12:59.380 "uuid": "40c6c2e1-638c-479f-982b-fe7a9c189012", 00:12:59.380 "strip_size_kb": 0, 00:12:59.380 "state": "online", 00:12:59.380 "raid_level": "raid1", 00:12:59.380 "superblock": false, 00:12:59.380 "num_base_bdevs": 2, 00:12:59.380 "num_base_bdevs_discovered": 2, 00:12:59.380 "num_base_bdevs_operational": 2, 00:12:59.380 "process": { 00:12:59.381 "type": "rebuild", 00:12:59.381 "target": "spare", 00:12:59.381 "progress": { 00:12:59.381 "blocks": 12288, 00:12:59.381 "percent": 18 00:12:59.381 } 00:12:59.381 }, 00:12:59.381 "base_bdevs_list": [ 00:12:59.381 { 00:12:59.381 "name": 
"spare", 00:12:59.381 "uuid": "cbf1a182-288b-5d04-b4a4-9fcbc7b6054d", 00:12:59.381 "is_configured": true, 00:12:59.381 "data_offset": 0, 00:12:59.381 "data_size": 65536 00:12:59.381 }, 00:12:59.381 { 00:12:59.381 "name": "BaseBdev2", 00:12:59.381 "uuid": "43a8c263-ad3b-5ff0-9db8-afa402774534", 00:12:59.381 "is_configured": true, 00:12:59.381 "data_offset": 0, 00:12:59.381 "data_size": 65536 00:12:59.381 } 00:12:59.381 ] 00:12:59.381 }' 00:12:59.381 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:59.381 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:59.381 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:59.381 [2024-10-05 08:49:35.710767] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:59.381 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:59.381 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:59.381 08:49:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.381 08:49:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.381 [2024-10-05 08:49:35.741929] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:59.381 [2024-10-05 08:49:35.829669] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:59.381 [2024-10-05 08:49:35.829964] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:59.381 [2024-10-05 08:49:35.841951] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:59.641 
[2024-10-05 08:49:35.855312] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.641 [2024-10-05 08:49:35.855347] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:59.641 [2024-10-05 08:49:35.855361] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:59.641 [2024-10-05 08:49:35.888745] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:59.641 08:49:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.641 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:59.641 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.641 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.641 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.641 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.641 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:59.641 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.641 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.641 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.641 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.641 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.641 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.641 08:49:35 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.641 08:49:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.641 08:49:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.641 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.641 "name": "raid_bdev1", 00:12:59.641 "uuid": "40c6c2e1-638c-479f-982b-fe7a9c189012", 00:12:59.641 "strip_size_kb": 0, 00:12:59.641 "state": "online", 00:12:59.641 "raid_level": "raid1", 00:12:59.641 "superblock": false, 00:12:59.641 "num_base_bdevs": 2, 00:12:59.641 "num_base_bdevs_discovered": 1, 00:12:59.641 "num_base_bdevs_operational": 1, 00:12:59.641 "base_bdevs_list": [ 00:12:59.641 { 00:12:59.641 "name": null, 00:12:59.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.641 "is_configured": false, 00:12:59.641 "data_offset": 0, 00:12:59.641 "data_size": 65536 00:12:59.641 }, 00:12:59.641 { 00:12:59.641 "name": "BaseBdev2", 00:12:59.641 "uuid": "43a8c263-ad3b-5ff0-9db8-afa402774534", 00:12:59.641 "is_configured": true, 00:12:59.641 "data_offset": 0, 00:12:59.641 "data_size": 65536 00:12:59.641 } 00:12:59.641 ] 00:12:59.641 }' 00:12:59.641 08:49:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.641 08:49:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.900 178.00 IOPS, 534.00 MiB/s 08:49:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:59.900 08:49:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:59.900 08:49:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:59.900 08:49:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:59.900 08:49:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:12:59.900 08:49:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.900 08:49:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.900 08:49:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.900 08:49:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.900 08:49:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.900 08:49:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:59.900 "name": "raid_bdev1", 00:12:59.900 "uuid": "40c6c2e1-638c-479f-982b-fe7a9c189012", 00:12:59.900 "strip_size_kb": 0, 00:12:59.900 "state": "online", 00:12:59.900 "raid_level": "raid1", 00:12:59.900 "superblock": false, 00:12:59.900 "num_base_bdevs": 2, 00:12:59.900 "num_base_bdevs_discovered": 1, 00:12:59.900 "num_base_bdevs_operational": 1, 00:12:59.900 "base_bdevs_list": [ 00:12:59.900 { 00:12:59.900 "name": null, 00:12:59.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.900 "is_configured": false, 00:12:59.900 "data_offset": 0, 00:12:59.900 "data_size": 65536 00:12:59.900 }, 00:12:59.900 { 00:12:59.901 "name": "BaseBdev2", 00:12:59.901 "uuid": "43a8c263-ad3b-5ff0-9db8-afa402774534", 00:12:59.901 "is_configured": true, 00:12:59.901 "data_offset": 0, 00:12:59.901 "data_size": 65536 00:12:59.901 } 00:12:59.901 ] 00:12:59.901 }' 00:12:59.901 08:49:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:00.159 08:49:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:00.159 08:49:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:00.159 08:49:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:00.159 08:49:36 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:00.159 08:49:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.159 08:49:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.159 [2024-10-05 08:49:36.475086] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:00.159 08:49:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.159 08:49:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:00.159 [2024-10-05 08:49:36.525007] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:00.160 [2024-10-05 08:49:36.526759] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:00.419 [2024-10-05 08:49:36.643523] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:00.419 [2024-10-05 08:49:36.643883] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:00.419 [2024-10-05 08:49:36.845079] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:00.419 [2024-10-05 08:49:36.845313] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:00.987 182.33 IOPS, 547.00 MiB/s [2024-10-05 08:49:37.168840] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:00.987 [2024-10-05 08:49:37.387969] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:00.987 [2024-10-05 08:49:37.388193] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 
10240 offset_begin: 6144 offset_end: 12288 00:13:01.247 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:01.247 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.247 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:01.247 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:01.247 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:01.247 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.247 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.247 08:49:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.247 08:49:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.247 08:49:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.247 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.247 "name": "raid_bdev1", 00:13:01.247 "uuid": "40c6c2e1-638c-479f-982b-fe7a9c189012", 00:13:01.247 "strip_size_kb": 0, 00:13:01.247 "state": "online", 00:13:01.247 "raid_level": "raid1", 00:13:01.247 "superblock": false, 00:13:01.247 "num_base_bdevs": 2, 00:13:01.247 "num_base_bdevs_discovered": 2, 00:13:01.247 "num_base_bdevs_operational": 2, 00:13:01.247 "process": { 00:13:01.247 "type": "rebuild", 00:13:01.247 "target": "spare", 00:13:01.247 "progress": { 00:13:01.247 "blocks": 12288, 00:13:01.247 "percent": 18 00:13:01.247 } 00:13:01.247 }, 00:13:01.247 "base_bdevs_list": [ 00:13:01.247 { 00:13:01.247 "name": "spare", 00:13:01.247 "uuid": "cbf1a182-288b-5d04-b4a4-9fcbc7b6054d", 00:13:01.247 "is_configured": true, 
00:13:01.247 "data_offset": 0, 00:13:01.247 "data_size": 65536 00:13:01.247 }, 00:13:01.247 { 00:13:01.247 "name": "BaseBdev2", 00:13:01.247 "uuid": "43a8c263-ad3b-5ff0-9db8-afa402774534", 00:13:01.247 "is_configured": true, 00:13:01.247 "data_offset": 0, 00:13:01.247 "data_size": 65536 00:13:01.247 } 00:13:01.247 ] 00:13:01.247 }' 00:13:01.247 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:01.247 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:01.247 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:01.247 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:01.247 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:01.247 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:01.247 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:01.247 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:01.247 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=409 00:13:01.247 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:01.247 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:01.247 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.247 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:01.247 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:01.247 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:13:01.247 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.247 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.247 08:49:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.247 08:49:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.247 08:49:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.247 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.247 "name": "raid_bdev1", 00:13:01.247 "uuid": "40c6c2e1-638c-479f-982b-fe7a9c189012", 00:13:01.247 "strip_size_kb": 0, 00:13:01.247 "state": "online", 00:13:01.247 "raid_level": "raid1", 00:13:01.247 "superblock": false, 00:13:01.247 "num_base_bdevs": 2, 00:13:01.247 "num_base_bdevs_discovered": 2, 00:13:01.247 "num_base_bdevs_operational": 2, 00:13:01.247 "process": { 00:13:01.247 "type": "rebuild", 00:13:01.247 "target": "spare", 00:13:01.247 "progress": { 00:13:01.247 "blocks": 14336, 00:13:01.247 "percent": 21 00:13:01.247 } 00:13:01.247 }, 00:13:01.247 "base_bdevs_list": [ 00:13:01.247 { 00:13:01.247 "name": "spare", 00:13:01.247 "uuid": "cbf1a182-288b-5d04-b4a4-9fcbc7b6054d", 00:13:01.247 "is_configured": true, 00:13:01.247 "data_offset": 0, 00:13:01.247 "data_size": 65536 00:13:01.247 }, 00:13:01.247 { 00:13:01.247 "name": "BaseBdev2", 00:13:01.247 "uuid": "43a8c263-ad3b-5ff0-9db8-afa402774534", 00:13:01.247 "is_configured": true, 00:13:01.247 "data_offset": 0, 00:13:01.247 "data_size": 65536 00:13:01.247 } 00:13:01.247 ] 00:13:01.247 }' 00:13:01.247 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:01.506 [2024-10-05 08:49:37.723343] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 
00:13:01.506 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:01.506 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:01.507 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:01.507 08:49:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:01.765 [2024-10-05 08:49:38.051505] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:01.765 156.75 IOPS, 470.25 MiB/s [2024-10-05 08:49:38.165744] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:02.026 [2024-10-05 08:49:38.399523] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:02.593 08:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:02.593 08:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:02.593 08:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.593 08:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:02.593 08:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:02.593 08:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.593 08:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.593 08:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.593 08:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.593 08:49:38 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.593 08:49:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.593 08:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.593 "name": "raid_bdev1", 00:13:02.593 "uuid": "40c6c2e1-638c-479f-982b-fe7a9c189012", 00:13:02.593 "strip_size_kb": 0, 00:13:02.593 "state": "online", 00:13:02.593 "raid_level": "raid1", 00:13:02.593 "superblock": false, 00:13:02.593 "num_base_bdevs": 2, 00:13:02.593 "num_base_bdevs_discovered": 2, 00:13:02.593 "num_base_bdevs_operational": 2, 00:13:02.593 "process": { 00:13:02.593 "type": "rebuild", 00:13:02.593 "target": "spare", 00:13:02.593 "progress": { 00:13:02.593 "blocks": 32768, 00:13:02.593 "percent": 50 00:13:02.593 } 00:13:02.593 }, 00:13:02.593 "base_bdevs_list": [ 00:13:02.593 { 00:13:02.593 "name": "spare", 00:13:02.593 "uuid": "cbf1a182-288b-5d04-b4a4-9fcbc7b6054d", 00:13:02.593 "is_configured": true, 00:13:02.593 "data_offset": 0, 00:13:02.593 "data_size": 65536 00:13:02.593 }, 00:13:02.593 { 00:13:02.593 "name": "BaseBdev2", 00:13:02.593 "uuid": "43a8c263-ad3b-5ff0-9db8-afa402774534", 00:13:02.593 "is_configured": true, 00:13:02.593 "data_offset": 0, 00:13:02.593 "data_size": 65536 00:13:02.593 } 00:13:02.593 ] 00:13:02.593 }' 00:13:02.593 08:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.593 [2024-10-05 08:49:38.841430] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:02.593 08:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:02.593 08:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.593 08:49:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:02.593 08:49:38 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:03.792 133.80 IOPS, 401.40 MiB/s 08:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:03.792 08:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:03.792 08:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:03.792 08:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:03.792 08:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:03.792 08:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:03.792 08:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.792 08:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.792 08:49:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.792 08:49:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.792 08:49:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.792 08:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:03.792 "name": "raid_bdev1", 00:13:03.792 "uuid": "40c6c2e1-638c-479f-982b-fe7a9c189012", 00:13:03.792 "strip_size_kb": 0, 00:13:03.792 "state": "online", 00:13:03.792 "raid_level": "raid1", 00:13:03.792 "superblock": false, 00:13:03.792 "num_base_bdevs": 2, 00:13:03.792 "num_base_bdevs_discovered": 2, 00:13:03.792 "num_base_bdevs_operational": 2, 00:13:03.792 "process": { 00:13:03.792 "type": "rebuild", 00:13:03.792 "target": "spare", 00:13:03.792 "progress": { 00:13:03.792 "blocks": 53248, 00:13:03.792 "percent": 81 00:13:03.792 } 00:13:03.792 }, 00:13:03.792 
"base_bdevs_list": [ 00:13:03.792 { 00:13:03.792 "name": "spare", 00:13:03.792 "uuid": "cbf1a182-288b-5d04-b4a4-9fcbc7b6054d", 00:13:03.792 "is_configured": true, 00:13:03.792 "data_offset": 0, 00:13:03.792 "data_size": 65536 00:13:03.792 }, 00:13:03.792 { 00:13:03.792 "name": "BaseBdev2", 00:13:03.792 "uuid": "43a8c263-ad3b-5ff0-9db8-afa402774534", 00:13:03.792 "is_configured": true, 00:13:03.792 "data_offset": 0, 00:13:03.792 "data_size": 65536 00:13:03.792 } 00:13:03.792 ] 00:13:03.792 }' 00:13:03.793 08:49:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:03.793 08:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:03.793 08:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:03.793 08:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:03.793 08:49:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:03.793 [2024-10-05 08:49:40.138010] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:04.412 119.00 IOPS, 357.00 MiB/s [2024-10-05 08:49:40.673232] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:04.412 [2024-10-05 08:49:40.778399] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:04.412 [2024-10-05 08:49:40.780609] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.672 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:04.672 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:04.672 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.672 08:49:41 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:04.672 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:04.672 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:04.672 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.672 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.672 08:49:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.672 08:49:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.672 08:49:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.672 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:04.672 "name": "raid_bdev1", 00:13:04.672 "uuid": "40c6c2e1-638c-479f-982b-fe7a9c189012", 00:13:04.672 "strip_size_kb": 0, 00:13:04.672 "state": "online", 00:13:04.672 "raid_level": "raid1", 00:13:04.672 "superblock": false, 00:13:04.672 "num_base_bdevs": 2, 00:13:04.672 "num_base_bdevs_discovered": 2, 00:13:04.672 "num_base_bdevs_operational": 2, 00:13:04.672 "base_bdevs_list": [ 00:13:04.672 { 00:13:04.672 "name": "spare", 00:13:04.672 "uuid": "cbf1a182-288b-5d04-b4a4-9fcbc7b6054d", 00:13:04.672 "is_configured": true, 00:13:04.672 "data_offset": 0, 00:13:04.672 "data_size": 65536 00:13:04.672 }, 00:13:04.672 { 00:13:04.672 "name": "BaseBdev2", 00:13:04.672 "uuid": "43a8c263-ad3b-5ff0-9db8-afa402774534", 00:13:04.672 "is_configured": true, 00:13:04.672 "data_offset": 0, 00:13:04.672 "data_size": 65536 00:13:04.672 } 00:13:04.672 ] 00:13:04.672 }' 00:13:04.672 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.933 108.71 IOPS, 326.14 MiB/s 08:49:41 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:04.933 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.933 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:04.933 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:04.933 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:04.933 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.933 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:04.933 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:04.933 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:04.933 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.933 08:49:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.933 08:49:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.933 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.933 08:49:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.933 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:04.933 "name": "raid_bdev1", 00:13:04.933 "uuid": "40c6c2e1-638c-479f-982b-fe7a9c189012", 00:13:04.933 "strip_size_kb": 0, 00:13:04.933 "state": "online", 00:13:04.933 "raid_level": "raid1", 00:13:04.933 "superblock": false, 00:13:04.933 "num_base_bdevs": 2, 00:13:04.933 "num_base_bdevs_discovered": 2, 00:13:04.933 "num_base_bdevs_operational": 2, 00:13:04.933 "base_bdevs_list": [ 
00:13:04.933 { 00:13:04.933 "name": "spare", 00:13:04.933 "uuid": "cbf1a182-288b-5d04-b4a4-9fcbc7b6054d", 00:13:04.933 "is_configured": true, 00:13:04.933 "data_offset": 0, 00:13:04.933 "data_size": 65536 00:13:04.933 }, 00:13:04.933 { 00:13:04.933 "name": "BaseBdev2", 00:13:04.933 "uuid": "43a8c263-ad3b-5ff0-9db8-afa402774534", 00:13:04.933 "is_configured": true, 00:13:04.933 "data_offset": 0, 00:13:04.933 "data_size": 65536 00:13:04.933 } 00:13:04.933 ] 00:13:04.933 }' 00:13:04.933 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.933 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:04.933 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.933 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:04.933 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:04.933 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.933 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.933 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:04.933 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:04.933 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:04.933 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.933 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.933 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.933 08:49:41 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.933 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.933 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.933 08:49:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.933 08:49:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.933 08:49:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.194 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.194 "name": "raid_bdev1", 00:13:05.194 "uuid": "40c6c2e1-638c-479f-982b-fe7a9c189012", 00:13:05.194 "strip_size_kb": 0, 00:13:05.194 "state": "online", 00:13:05.194 "raid_level": "raid1", 00:13:05.194 "superblock": false, 00:13:05.194 "num_base_bdevs": 2, 00:13:05.194 "num_base_bdevs_discovered": 2, 00:13:05.194 "num_base_bdevs_operational": 2, 00:13:05.194 "base_bdevs_list": [ 00:13:05.194 { 00:13:05.194 "name": "spare", 00:13:05.194 "uuid": "cbf1a182-288b-5d04-b4a4-9fcbc7b6054d", 00:13:05.194 "is_configured": true, 00:13:05.194 "data_offset": 0, 00:13:05.194 "data_size": 65536 00:13:05.194 }, 00:13:05.194 { 00:13:05.194 "name": "BaseBdev2", 00:13:05.194 "uuid": "43a8c263-ad3b-5ff0-9db8-afa402774534", 00:13:05.194 "is_configured": true, 00:13:05.194 "data_offset": 0, 00:13:05.194 "data_size": 65536 00:13:05.194 } 00:13:05.194 ] 00:13:05.194 }' 00:13:05.194 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.194 08:49:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.454 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:05.454 08:49:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.454 08:49:41 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.454 [2024-10-05 08:49:41.803245] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:05.454 [2024-10-05 08:49:41.803363] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:05.454 00:13:05.454 Latency(us) 00:13:05.454 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:05.454 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:05.454 raid_bdev1 : 7.71 101.87 305.60 0.00 0.00 13144.72 286.18 108062.85 00:13:05.454 =================================================================================================================== 00:13:05.454 Total : 101.87 305.60 0.00 0.00 13144.72 286.18 108062.85 00:13:05.454 [2024-10-05 08:49:41.872427] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:05.454 [2024-10-05 08:49:41.872514] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:05.454 [2024-10-05 08:49:41.872605] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:05.454 [2024-10-05 08:49:41.872655] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:05.454 { 00:13:05.454 "results": [ 00:13:05.454 { 00:13:05.454 "job": "raid_bdev1", 00:13:05.454 "core_mask": "0x1", 00:13:05.454 "workload": "randrw", 00:13:05.454 "percentage": 50, 00:13:05.454 "status": "finished", 00:13:05.454 "queue_depth": 2, 00:13:05.454 "io_size": 3145728, 00:13:05.454 "runtime": 7.706157, 00:13:05.454 "iops": 101.8665983576509, 00:13:05.454 "mibps": 305.5997950729527, 00:13:05.454 "io_failed": 0, 00:13:05.454 "io_timeout": 0, 00:13:05.454 "avg_latency_us": 13144.716001446333, 00:13:05.454 "min_latency_us": 286.1834061135371, 00:13:05.454 "max_latency_us": 108062.85414847161 00:13:05.454 } 
00:13:05.454 ], 00:13:05.454 "core_count": 1 00:13:05.454 } 00:13:05.454 08:49:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.454 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.454 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:05.454 08:49:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.454 08:49:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.454 08:49:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.714 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:05.714 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:05.715 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:05.715 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:05.715 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:05.715 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:05.715 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:05.715 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:05.715 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:05.715 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:05.715 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:05.715 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:05.715 08:49:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:05.715 /dev/nbd0 00:13:05.715 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:05.715 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:05.715 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:05.715 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:13:05.715 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:05.715 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:05.715 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:05.715 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:13:05.715 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:05.715 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:05.715 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:05.715 1+0 records in 00:13:05.715 1+0 records out 00:13:05.715 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458141 s, 8.9 MB/s 00:13:05.715 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.715 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:13:05.715 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.715 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:13:05.715 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:13:05.715 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:05.715 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:05.715 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:05.715 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:05.715 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:05.715 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:05.715 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:05.715 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:05.715 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:05.715 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:05.715 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:05.715 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:05.715 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:05.715 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:05.974 /dev/nbd1 00:13:05.975 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:05.975 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:05.975 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:05.975 08:49:42 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:13:05.975 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:05.975 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:05.975 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:05.975 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:13:05.975 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:05.975 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:05.975 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:05.975 1+0 records in 00:13:05.975 1+0 records out 00:13:05.975 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271654 s, 15.1 MB/s 00:13:05.975 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.975 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:13:05.975 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.975 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:05.975 08:49:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:13:05.975 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:05.975 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:05.975 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:06.234 08:49:42 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:06.234 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:06.234 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:06.234 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:06.234 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:06.234 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.234 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:06.495 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:06.495 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:06.495 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:06.495 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.495 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.495 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:06.495 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:06.495 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.495 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:06.495 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:06.495 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:06.495 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # 
local nbd_list 00:13:06.495 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:06.495 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.495 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:06.755 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:06.755 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:06.755 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:06.755 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.755 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.755 08:49:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:06.755 08:49:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:06.755 08:49:43 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.755 08:49:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:06.755 08:49:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 74382 00:13:06.755 08:49:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 74382 ']' 00:13:06.755 08:49:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 74382 00:13:06.755 08:49:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:13:06.755 08:49:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:06.755 08:49:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74382 00:13:06.755 killing process with pid 74382 00:13:06.755 
Received shutdown signal, test time was about 8.897931 seconds 00:13:06.755 00:13:06.755 Latency(us) 00:13:06.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:06.755 =================================================================================================================== 00:13:06.755 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:06.755 08:49:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:06.755 08:49:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:06.755 08:49:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74382' 00:13:06.755 08:49:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 74382 00:13:06.755 [2024-10-05 08:49:43.040519] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:06.755 08:49:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 74382 00:13:07.015 [2024-10-05 08:49:43.254231] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:08.396 08:49:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:08.396 00:13:08.396 real 0m12.054s 00:13:08.396 user 0m15.105s 00:13:08.396 sys 0m1.472s 00:13:08.396 08:49:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:08.396 08:49:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.396 ************************************ 00:13:08.396 END TEST raid_rebuild_test_io 00:13:08.396 ************************************ 00:13:08.396 08:49:44 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:13:08.396 08:49:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:08.396 08:49:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:08.396 
08:49:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:08.396 ************************************ 00:13:08.396 START TEST raid_rebuild_test_sb_io 00:13:08.396 ************************************ 00:13:08.396 08:49:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:13:08.396 08:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:08.396 08:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:08.396 08:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:08.396 08:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:08.396 08:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:08.396 08:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:08.396 08:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:08.396 08:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:08.396 08:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:08.396 08:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:08.396 08:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:08.396 08:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:08.396 08:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:08.396 08:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:08.396 08:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:08.396 08:49:44 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:08.396 08:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:08.396 08:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:08.396 08:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:08.396 08:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:08.396 08:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:08.396 08:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:08.396 08:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:08.396 08:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:08.396 08:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=74682 00:13:08.396 08:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:08.396 08:49:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 74682 00:13:08.396 08:49:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 74682 ']' 00:13:08.396 08:49:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.396 08:49:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:08.397 08:49:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:08.397 08:49:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:08.397 08:49:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.397 [2024-10-05 08:49:44.671946] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:13:08.397 [2024-10-05 08:49:44.672162] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:08.397 Zero copy mechanism will not be used. 00:13:08.397 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74682 ] 00:13:08.397 [2024-10-05 08:49:44.834242] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.655 [2024-10-05 08:49:45.036079] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.914 [2024-10-05 08:49:45.228585] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:08.914 [2024-10-05 08:49:45.228721] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:09.174 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:09.174 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:13:09.174 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:09.174 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:09.174 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.174 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.174 BaseBdev1_malloc 00:13:09.174 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.174 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:09.174 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.174 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.174 [2024-10-05 08:49:45.562509] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:09.174 [2024-10-05 08:49:45.562589] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.174 [2024-10-05 08:49:45.562613] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:09.174 [2024-10-05 08:49:45.562626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.174 [2024-10-05 08:49:45.564592] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.174 [2024-10-05 08:49:45.564710] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:09.174 BaseBdev1 00:13:09.174 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.174 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:09.174 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:09.174 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.174 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.174 BaseBdev2_malloc 00:13:09.174 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.174 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:13:09.174 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.174 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.174 [2024-10-05 08:49:45.625563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:09.174 [2024-10-05 08:49:45.625644] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.174 [2024-10-05 08:49:45.625663] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:09.174 [2024-10-05 08:49:45.625673] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.174 [2024-10-05 08:49:45.627631] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.174 [2024-10-05 08:49:45.627668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:09.174 BaseBdev2 00:13:09.174 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.174 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:09.174 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.174 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.435 spare_malloc 00:13:09.435 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.435 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:09.435 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.435 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.435 spare_delay 
00:13:09.435 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.435 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:09.435 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.435 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.435 [2024-10-05 08:49:45.692086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:09.435 [2024-10-05 08:49:45.692168] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.435 [2024-10-05 08:49:45.692187] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:09.435 [2024-10-05 08:49:45.692197] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.435 [2024-10-05 08:49:45.694204] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.435 [2024-10-05 08:49:45.694312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:09.435 spare 00:13:09.435 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.436 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:09.436 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.436 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.436 [2024-10-05 08:49:45.704119] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:09.436 [2024-10-05 08:49:45.705846] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:09.436 [2024-10-05 08:49:45.706019] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:09.436 [2024-10-05 08:49:45.706035] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:09.436 [2024-10-05 08:49:45.706273] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:09.436 [2024-10-05 08:49:45.706426] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:09.436 [2024-10-05 08:49:45.706434] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:09.436 [2024-10-05 08:49:45.706566] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.436 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.436 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:09.436 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.436 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.436 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:09.436 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:09.436 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:09.436 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.436 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.436 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.436 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.436 08:49:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.436 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.436 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.436 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.436 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.436 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.436 "name": "raid_bdev1", 00:13:09.436 "uuid": "024934f0-63f7-4950-aacb-e6b915aac895", 00:13:09.436 "strip_size_kb": 0, 00:13:09.436 "state": "online", 00:13:09.436 "raid_level": "raid1", 00:13:09.436 "superblock": true, 00:13:09.436 "num_base_bdevs": 2, 00:13:09.436 "num_base_bdevs_discovered": 2, 00:13:09.436 "num_base_bdevs_operational": 2, 00:13:09.436 "base_bdevs_list": [ 00:13:09.436 { 00:13:09.436 "name": "BaseBdev1", 00:13:09.436 "uuid": "ff776fb1-aaa8-5f3e-b466-f2fd97553f77", 00:13:09.436 "is_configured": true, 00:13:09.436 "data_offset": 2048, 00:13:09.436 "data_size": 63488 00:13:09.436 }, 00:13:09.436 { 00:13:09.436 "name": "BaseBdev2", 00:13:09.436 "uuid": "8a0a9bb2-c7c3-5a0d-af8e-10bb52d71479", 00:13:09.436 "is_configured": true, 00:13:09.436 "data_offset": 2048, 00:13:09.436 "data_size": 63488 00:13:09.436 } 00:13:09.436 ] 00:13:09.436 }' 00:13:09.436 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.436 08:49:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.695 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:09.695 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:09.695 08:49:46 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.695 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.695 [2024-10-05 08:49:46.155589] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:09.955 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.955 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:09.955 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.955 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.955 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.955 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:09.955 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.955 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:09.955 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:09.955 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:09.955 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:09.955 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.955 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.955 [2024-10-05 08:49:46.255126] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:09.955 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:13:09.955 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:09.955 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.955 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.955 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:09.955 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:09.955 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:09.955 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.955 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.955 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.955 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.955 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.955 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.955 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.955 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.955 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.955 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.955 "name": "raid_bdev1", 00:13:09.956 "uuid": "024934f0-63f7-4950-aacb-e6b915aac895", 00:13:09.956 "strip_size_kb": 0, 00:13:09.956 "state": "online", 00:13:09.956 
"raid_level": "raid1", 00:13:09.956 "superblock": true, 00:13:09.956 "num_base_bdevs": 2, 00:13:09.956 "num_base_bdevs_discovered": 1, 00:13:09.956 "num_base_bdevs_operational": 1, 00:13:09.956 "base_bdevs_list": [ 00:13:09.956 { 00:13:09.956 "name": null, 00:13:09.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.956 "is_configured": false, 00:13:09.956 "data_offset": 0, 00:13:09.956 "data_size": 63488 00:13:09.956 }, 00:13:09.956 { 00:13:09.956 "name": "BaseBdev2", 00:13:09.956 "uuid": "8a0a9bb2-c7c3-5a0d-af8e-10bb52d71479", 00:13:09.956 "is_configured": true, 00:13:09.956 "data_offset": 2048, 00:13:09.956 "data_size": 63488 00:13:09.956 } 00:13:09.956 ] 00:13:09.956 }' 00:13:09.956 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.956 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.956 [2024-10-05 08:49:46.346807] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:09.956 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:09.956 Zero copy mechanism will not be used. 00:13:09.956 Running I/O for 60 seconds... 
00:13:10.216 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:10.216 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.216 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.216 [2024-10-05 08:49:46.659096] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:10.476 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.476 08:49:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:10.476 [2024-10-05 08:49:46.712825] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:10.476 [2024-10-05 08:49:46.714773] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:10.476 [2024-10-05 08:49:46.821620] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:10.476 [2024-10-05 08:49:46.822253] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:10.736 [2024-10-05 08:49:47.030913] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:10.736 [2024-10-05 08:49:47.031294] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:10.996 [2024-10-05 08:49:47.286586] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:10.996 177.00 IOPS, 531.00 MiB/s [2024-10-05 08:49:47.396396] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:11.256 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:11.256 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:11.256 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:11.256 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:11.256 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:11.256 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.256 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.256 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.256 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.256 [2024-10-05 08:49:47.714563] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:11.516 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.516 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:11.516 "name": "raid_bdev1", 00:13:11.516 "uuid": "024934f0-63f7-4950-aacb-e6b915aac895", 00:13:11.516 "strip_size_kb": 0, 00:13:11.516 "state": "online", 00:13:11.516 "raid_level": "raid1", 00:13:11.516 "superblock": true, 00:13:11.516 "num_base_bdevs": 2, 00:13:11.516 "num_base_bdevs_discovered": 2, 00:13:11.516 "num_base_bdevs_operational": 2, 00:13:11.516 "process": { 00:13:11.516 "type": "rebuild", 00:13:11.516 "target": "spare", 00:13:11.516 "progress": { 00:13:11.516 "blocks": 12288, 00:13:11.516 "percent": 19 00:13:11.516 } 00:13:11.516 }, 00:13:11.516 "base_bdevs_list": [ 00:13:11.516 { 00:13:11.516 "name": "spare", 00:13:11.516 "uuid": 
"289f84a3-16f6-5019-bc19-bcc7b0b0f9b7", 00:13:11.516 "is_configured": true, 00:13:11.516 "data_offset": 2048, 00:13:11.516 "data_size": 63488 00:13:11.516 }, 00:13:11.516 { 00:13:11.516 "name": "BaseBdev2", 00:13:11.516 "uuid": "8a0a9bb2-c7c3-5a0d-af8e-10bb52d71479", 00:13:11.516 "is_configured": true, 00:13:11.516 "data_offset": 2048, 00:13:11.516 "data_size": 63488 00:13:11.516 } 00:13:11.516 ] 00:13:11.516 }' 00:13:11.516 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.516 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:11.516 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.516 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:11.516 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:11.516 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.516 08:49:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.516 [2024-10-05 08:49:47.850112] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:11.516 [2024-10-05 08:49:47.942074] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:11.776 [2024-10-05 08:49:48.054527] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:11.776 [2024-10-05 08:49:48.062538] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.776 [2024-10-05 08:49:48.062628] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:11.776 [2024-10-05 08:49:48.062659] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: 
No such device 00:13:11.776 [2024-10-05 08:49:48.101368] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:11.776 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.776 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:11.776 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.776 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.776 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.776 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.776 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:11.776 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.776 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.776 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.776 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.776 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.776 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.776 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.776 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.776 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.776 08:49:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.776 "name": "raid_bdev1", 00:13:11.776 "uuid": "024934f0-63f7-4950-aacb-e6b915aac895", 00:13:11.776 "strip_size_kb": 0, 00:13:11.776 "state": "online", 00:13:11.776 "raid_level": "raid1", 00:13:11.776 "superblock": true, 00:13:11.776 "num_base_bdevs": 2, 00:13:11.776 "num_base_bdevs_discovered": 1, 00:13:11.776 "num_base_bdevs_operational": 1, 00:13:11.776 "base_bdevs_list": [ 00:13:11.776 { 00:13:11.776 "name": null, 00:13:11.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.776 "is_configured": false, 00:13:11.776 "data_offset": 0, 00:13:11.776 "data_size": 63488 00:13:11.776 }, 00:13:11.776 { 00:13:11.776 "name": "BaseBdev2", 00:13:11.776 "uuid": "8a0a9bb2-c7c3-5a0d-af8e-10bb52d71479", 00:13:11.776 "is_configured": true, 00:13:11.776 "data_offset": 2048, 00:13:11.776 "data_size": 63488 00:13:11.776 } 00:13:11.776 ] 00:13:11.776 }' 00:13:11.776 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.776 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.300 143.50 IOPS, 430.50 MiB/s 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:12.300 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.300 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:12.300 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:12.300 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.300 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.300 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:12.300 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.300 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.300 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.300 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.300 "name": "raid_bdev1", 00:13:12.300 "uuid": "024934f0-63f7-4950-aacb-e6b915aac895", 00:13:12.300 "strip_size_kb": 0, 00:13:12.300 "state": "online", 00:13:12.300 "raid_level": "raid1", 00:13:12.300 "superblock": true, 00:13:12.300 "num_base_bdevs": 2, 00:13:12.300 "num_base_bdevs_discovered": 1, 00:13:12.300 "num_base_bdevs_operational": 1, 00:13:12.300 "base_bdevs_list": [ 00:13:12.300 { 00:13:12.300 "name": null, 00:13:12.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.300 "is_configured": false, 00:13:12.300 "data_offset": 0, 00:13:12.300 "data_size": 63488 00:13:12.300 }, 00:13:12.300 { 00:13:12.300 "name": "BaseBdev2", 00:13:12.300 "uuid": "8a0a9bb2-c7c3-5a0d-af8e-10bb52d71479", 00:13:12.300 "is_configured": true, 00:13:12.300 "data_offset": 2048, 00:13:12.300 "data_size": 63488 00:13:12.300 } 00:13:12.300 ] 00:13:12.300 }' 00:13:12.300 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.300 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:12.300 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.300 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:12.300 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:12.300 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.300 
08:49:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.580 [2024-10-05 08:49:48.781863] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:12.580 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.580 08:49:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:12.580 [2024-10-05 08:49:48.820116] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:12.580 [2024-10-05 08:49:48.821971] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:12.580 [2024-10-05 08:49:48.935330] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:12.580 [2024-10-05 08:49:48.935932] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:12.852 [2024-10-05 08:49:49.144995] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:12.852 [2024-10-05 08:49:49.145303] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:13.112 159.00 IOPS, 477.00 MiB/s [2024-10-05 08:49:49.375454] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:13.372 [2024-10-05 08:49:49.585715] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:13.372 08:49:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:13.372 08:49:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:13.372 08:49:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:13:13.372 08:49:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:13.372 08:49:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:13.372 08:49:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.372 08:49:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.372 08:49:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.372 08:49:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.632 08:49:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.632 08:49:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:13.632 "name": "raid_bdev1", 00:13:13.632 "uuid": "024934f0-63f7-4950-aacb-e6b915aac895", 00:13:13.632 "strip_size_kb": 0, 00:13:13.632 "state": "online", 00:13:13.632 "raid_level": "raid1", 00:13:13.632 "superblock": true, 00:13:13.632 "num_base_bdevs": 2, 00:13:13.632 "num_base_bdevs_discovered": 2, 00:13:13.632 "num_base_bdevs_operational": 2, 00:13:13.632 "process": { 00:13:13.632 "type": "rebuild", 00:13:13.632 "target": "spare", 00:13:13.632 "progress": { 00:13:13.632 "blocks": 12288, 00:13:13.632 "percent": 19 00:13:13.632 } 00:13:13.632 }, 00:13:13.632 "base_bdevs_list": [ 00:13:13.632 { 00:13:13.632 "name": "spare", 00:13:13.632 "uuid": "289f84a3-16f6-5019-bc19-bcc7b0b0f9b7", 00:13:13.632 "is_configured": true, 00:13:13.632 "data_offset": 2048, 00:13:13.632 "data_size": 63488 00:13:13.632 }, 00:13:13.632 { 00:13:13.632 "name": "BaseBdev2", 00:13:13.632 "uuid": "8a0a9bb2-c7c3-5a0d-af8e-10bb52d71479", 00:13:13.632 "is_configured": true, 00:13:13.632 "data_offset": 2048, 00:13:13.632 "data_size": 63488 00:13:13.632 } 00:13:13.632 ] 00:13:13.632 }' 00:13:13.632 
08:49:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:13.632 [2024-10-05 08:49:49.923480] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:13.632 08:49:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:13.632 08:49:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:13.632 08:49:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:13.632 08:49:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:13.632 08:49:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:13.632 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:13.632 08:49:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:13.632 08:49:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:13.632 08:49:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:13.632 08:49:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=421 00:13:13.632 08:49:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:13.632 08:49:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:13.632 08:49:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:13.633 08:49:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:13.633 08:49:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:13.633 08:49:49 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:13.633 08:49:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.633 08:49:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.633 08:49:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.633 08:49:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.633 08:49:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.633 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:13.633 "name": "raid_bdev1", 00:13:13.633 "uuid": "024934f0-63f7-4950-aacb-e6b915aac895", 00:13:13.633 "strip_size_kb": 0, 00:13:13.633 "state": "online", 00:13:13.633 "raid_level": "raid1", 00:13:13.633 "superblock": true, 00:13:13.633 "num_base_bdevs": 2, 00:13:13.633 "num_base_bdevs_discovered": 2, 00:13:13.633 "num_base_bdevs_operational": 2, 00:13:13.633 "process": { 00:13:13.633 "type": "rebuild", 00:13:13.633 "target": "spare", 00:13:13.633 "progress": { 00:13:13.633 "blocks": 14336, 00:13:13.633 "percent": 22 00:13:13.633 } 00:13:13.633 }, 00:13:13.633 "base_bdevs_list": [ 00:13:13.633 { 00:13:13.633 "name": "spare", 00:13:13.633 "uuid": "289f84a3-16f6-5019-bc19-bcc7b0b0f9b7", 00:13:13.633 "is_configured": true, 00:13:13.633 "data_offset": 2048, 00:13:13.633 "data_size": 63488 00:13:13.633 }, 00:13:13.633 { 00:13:13.633 "name": "BaseBdev2", 00:13:13.633 "uuid": "8a0a9bb2-c7c3-5a0d-af8e-10bb52d71479", 00:13:13.633 "is_configured": true, 00:13:13.633 "data_offset": 2048, 00:13:13.633 "data_size": 63488 00:13:13.633 } 00:13:13.633 ] 00:13:13.633 }' 00:13:13.633 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:13.633 [2024-10-05 08:49:50.043512] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:13.633 [2024-10-05 08:49:50.043897] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:13.633 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:13.633 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:13.893 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:13.893 08:49:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:14.153 136.00 IOPS, 408.00 MiB/s [2024-10-05 08:49:50.504390] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:14.722 [2024-10-05 08:49:51.066475] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:14.722 08:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:14.722 08:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:14.722 08:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.722 08:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:14.722 08:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:14.722 08:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:14.722 08:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.722 08:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:13:14.722 08:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.722 08:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.722 08:49:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.722 08:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:14.722 "name": "raid_bdev1", 00:13:14.722 "uuid": "024934f0-63f7-4950-aacb-e6b915aac895", 00:13:14.722 "strip_size_kb": 0, 00:13:14.722 "state": "online", 00:13:14.722 "raid_level": "raid1", 00:13:14.722 "superblock": true, 00:13:14.722 "num_base_bdevs": 2, 00:13:14.722 "num_base_bdevs_discovered": 2, 00:13:14.722 "num_base_bdevs_operational": 2, 00:13:14.722 "process": { 00:13:14.722 "type": "rebuild", 00:13:14.722 "target": "spare", 00:13:14.722 "progress": { 00:13:14.722 "blocks": 32768, 00:13:14.722 "percent": 51 00:13:14.722 } 00:13:14.722 }, 00:13:14.722 "base_bdevs_list": [ 00:13:14.722 { 00:13:14.722 "name": "spare", 00:13:14.722 "uuid": "289f84a3-16f6-5019-bc19-bcc7b0b0f9b7", 00:13:14.722 "is_configured": true, 00:13:14.722 "data_offset": 2048, 00:13:14.722 "data_size": 63488 00:13:14.722 }, 00:13:14.722 { 00:13:14.722 "name": "BaseBdev2", 00:13:14.722 "uuid": "8a0a9bb2-c7c3-5a0d-af8e-10bb52d71479", 00:13:14.722 "is_configured": true, 00:13:14.722 "data_offset": 2048, 00:13:14.722 "data_size": 63488 00:13:14.722 } 00:13:14.722 ] 00:13:14.722 }' 00:13:14.722 08:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.982 08:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:14.982 08:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.982 08:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:13:14.982 08:49:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:14.982 121.40 IOPS, 364.20 MiB/s [2024-10-05 08:49:51.393878] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:15.922 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:15.922 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:15.922 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:15.922 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:15.922 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:15.922 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:15.922 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.922 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.922 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.922 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.922 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.922 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:15.922 "name": "raid_bdev1", 00:13:15.922 "uuid": "024934f0-63f7-4950-aacb-e6b915aac895", 00:13:15.922 "strip_size_kb": 0, 00:13:15.922 "state": "online", 00:13:15.922 "raid_level": "raid1", 00:13:15.922 "superblock": true, 00:13:15.922 "num_base_bdevs": 2, 00:13:15.922 "num_base_bdevs_discovered": 2, 00:13:15.922 "num_base_bdevs_operational": 2, 
00:13:15.922 "process": { 00:13:15.922 "type": "rebuild", 00:13:15.922 "target": "spare", 00:13:15.922 "progress": { 00:13:15.922 "blocks": 55296, 00:13:15.922 "percent": 87 00:13:15.922 } 00:13:15.922 }, 00:13:15.922 "base_bdevs_list": [ 00:13:15.922 { 00:13:15.922 "name": "spare", 00:13:15.922 "uuid": "289f84a3-16f6-5019-bc19-bcc7b0b0f9b7", 00:13:15.922 "is_configured": true, 00:13:15.922 "data_offset": 2048, 00:13:15.922 "data_size": 63488 00:13:15.922 }, 00:13:15.922 { 00:13:15.922 "name": "BaseBdev2", 00:13:15.922 "uuid": "8a0a9bb2-c7c3-5a0d-af8e-10bb52d71479", 00:13:15.922 "is_configured": true, 00:13:15.922 "data_offset": 2048, 00:13:15.922 "data_size": 63488 00:13:15.922 } 00:13:15.922 ] 00:13:15.922 }' 00:13:15.922 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:15.922 107.00 IOPS, 321.00 MiB/s 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:15.922 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:16.183 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:16.183 08:49:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:16.183 [2024-10-05 08:49:52.460015] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:13:16.443 [2024-10-05 08:49:52.680573] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:16.443 [2024-10-05 08:49:52.786395] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:16.443 [2024-10-05 08:49:52.788234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.012 98.00 IOPS, 294.00 MiB/s 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 
00:13:17.012 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:17.012 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:17.012 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:17.012 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:17.012 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:17.012 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.012 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.012 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.012 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.012 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.272 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:17.272 "name": "raid_bdev1", 00:13:17.272 "uuid": "024934f0-63f7-4950-aacb-e6b915aac895", 00:13:17.272 "strip_size_kb": 0, 00:13:17.272 "state": "online", 00:13:17.272 "raid_level": "raid1", 00:13:17.272 "superblock": true, 00:13:17.272 "num_base_bdevs": 2, 00:13:17.272 "num_base_bdevs_discovered": 2, 00:13:17.272 "num_base_bdevs_operational": 2, 00:13:17.272 "base_bdevs_list": [ 00:13:17.272 { 00:13:17.272 "name": "spare", 00:13:17.272 "uuid": "289f84a3-16f6-5019-bc19-bcc7b0b0f9b7", 00:13:17.272 "is_configured": true, 00:13:17.272 "data_offset": 2048, 00:13:17.272 "data_size": 63488 00:13:17.272 }, 00:13:17.272 { 00:13:17.272 "name": "BaseBdev2", 00:13:17.272 "uuid": "8a0a9bb2-c7c3-5a0d-af8e-10bb52d71479", 00:13:17.272 "is_configured": 
true, 00:13:17.272 "data_offset": 2048, 00:13:17.272 "data_size": 63488 00:13:17.272 } 00:13:17.272 ] 00:13:17.272 }' 00:13:17.272 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:17.272 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:17.272 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:17.272 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:17.273 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:17.273 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:17.273 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:17.273 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:17.273 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:17.273 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:17.273 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.273 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.273 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.273 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.273 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.273 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:17.273 "name": "raid_bdev1", 00:13:17.273 "uuid": 
"024934f0-63f7-4950-aacb-e6b915aac895", 00:13:17.273 "strip_size_kb": 0, 00:13:17.273 "state": "online", 00:13:17.273 "raid_level": "raid1", 00:13:17.273 "superblock": true, 00:13:17.273 "num_base_bdevs": 2, 00:13:17.273 "num_base_bdevs_discovered": 2, 00:13:17.273 "num_base_bdevs_operational": 2, 00:13:17.273 "base_bdevs_list": [ 00:13:17.273 { 00:13:17.273 "name": "spare", 00:13:17.273 "uuid": "289f84a3-16f6-5019-bc19-bcc7b0b0f9b7", 00:13:17.273 "is_configured": true, 00:13:17.273 "data_offset": 2048, 00:13:17.273 "data_size": 63488 00:13:17.273 }, 00:13:17.273 { 00:13:17.273 "name": "BaseBdev2", 00:13:17.273 "uuid": "8a0a9bb2-c7c3-5a0d-af8e-10bb52d71479", 00:13:17.273 "is_configured": true, 00:13:17.273 "data_offset": 2048, 00:13:17.273 "data_size": 63488 00:13:17.273 } 00:13:17.273 ] 00:13:17.273 }' 00:13:17.273 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:17.273 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:17.273 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:17.273 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:17.273 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:17.273 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:17.273 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:17.273 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:17.273 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:17.273 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:17.273 08:49:53 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.273 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.273 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.273 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.273 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.273 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.273 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.533 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.533 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.533 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.533 "name": "raid_bdev1", 00:13:17.533 "uuid": "024934f0-63f7-4950-aacb-e6b915aac895", 00:13:17.533 "strip_size_kb": 0, 00:13:17.533 "state": "online", 00:13:17.533 "raid_level": "raid1", 00:13:17.533 "superblock": true, 00:13:17.533 "num_base_bdevs": 2, 00:13:17.533 "num_base_bdevs_discovered": 2, 00:13:17.533 "num_base_bdevs_operational": 2, 00:13:17.533 "base_bdevs_list": [ 00:13:17.533 { 00:13:17.533 "name": "spare", 00:13:17.533 "uuid": "289f84a3-16f6-5019-bc19-bcc7b0b0f9b7", 00:13:17.533 "is_configured": true, 00:13:17.533 "data_offset": 2048, 00:13:17.533 "data_size": 63488 00:13:17.533 }, 00:13:17.533 { 00:13:17.533 "name": "BaseBdev2", 00:13:17.533 "uuid": "8a0a9bb2-c7c3-5a0d-af8e-10bb52d71479", 00:13:17.533 "is_configured": true, 00:13:17.533 "data_offset": 2048, 00:13:17.533 "data_size": 63488 00:13:17.533 } 00:13:17.533 ] 00:13:17.533 }' 00:13:17.533 08:49:53 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.533 08:49:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.793 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:17.793 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.793 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.793 [2024-10-05 08:49:54.141923] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:17.793 [2024-10-05 08:49:54.142066] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:17.793 00:13:17.793 Latency(us) 00:13:17.793 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.793 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:17.793 raid_bdev1 : 7.87 90.83 272.48 0.00 0.00 15052.44 327.32 114015.47 00:13:17.793 =================================================================================================================== 00:13:17.793 Total : 90.83 272.48 0.00 0.00 15052.44 327.32 114015.47 00:13:17.793 [2024-10-05 08:49:54.225712] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.793 [2024-10-05 08:49:54.225807] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:17.793 [2024-10-05 08:49:54.225897] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:17.793 [2024-10-05 08:49:54.225966] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:17.793 { 00:13:17.793 "results": [ 00:13:17.793 { 00:13:17.793 "job": "raid_bdev1", 00:13:17.793 "core_mask": "0x1", 00:13:17.793 "workload": "randrw", 00:13:17.793 "percentage": 50, 
00:13:17.793 "status": "finished", 00:13:17.793 "queue_depth": 2, 00:13:17.793 "io_size": 3145728, 00:13:17.793 "runtime": 7.872093, 00:13:17.793 "iops": 90.82717899801234, 00:13:17.793 "mibps": 272.481536994037, 00:13:17.793 "io_failed": 0, 00:13:17.793 "io_timeout": 0, 00:13:17.793 "avg_latency_us": 15052.442896143159, 00:13:17.793 "min_latency_us": 327.32227074235806, 00:13:17.794 "max_latency_us": 114015.46899563319 00:13:17.794 } 00:13:17.794 ], 00:13:17.794 "core_count": 1 00:13:17.794 } 00:13:17.794 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.794 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.794 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:17.794 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.794 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.794 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.054 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:18.054 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:18.054 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:18.054 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:18.054 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:18.054 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:18.054 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:18.054 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0') 00:13:18.054 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:18.054 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:18.054 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:18.054 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:18.054 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:18.054 /dev/nbd0 00:13:18.054 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:18.054 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:18.054 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:18.054 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:13:18.054 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:18.054 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:18.054 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:18.054 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:13:18.054 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:18.054 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:18.054 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:18.054 1+0 records in 00:13:18.054 1+0 records out 00:13:18.054 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000598101 s, 6.8 MB/s 00:13:18.054 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.315 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:13:18.315 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.315 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:18.315 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:13:18.315 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:18.315 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:18.315 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:18.315 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:18.315 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:18.315 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:18.315 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:18.315 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:18.315 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:18.315 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:18.315 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:18.315 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:18.315 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:18.315 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:18.315 /dev/nbd1 00:13:18.315 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:18.315 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:18.315 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:18.315 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:13:18.315 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:18.315 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:18.315 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:18.315 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:13:18.315 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:18.315 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:18.315 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:18.315 1+0 records in 00:13:18.315 1+0 records out 00:13:18.315 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000552696 s, 7.4 MB/s 00:13:18.575 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.575 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:13:18.575 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm 
-f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.575 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:18.576 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:13:18.576 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:18.576 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:18.576 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:18.576 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:18.576 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:18.576 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:18.576 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:18.576 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:18.576 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:18.576 08:49:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:18.836 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:18.836 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:18.836 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:18.836 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:18.836 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:18.836 08:49:55 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:18.836 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:18.836 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:18.836 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:18.836 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:18.836 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:18.836 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:18.836 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:18.836 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:18.836 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:19.095 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:19.095 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:19.095 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:19.095 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:19.095 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:19.095 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:19.096 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:19.096 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:19.096 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- 
# '[' true = true ']' 00:13:19.096 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:19.096 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.096 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.096 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.096 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:19.096 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.096 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.096 [2024-10-05 08:49:55.426468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:19.096 [2024-10-05 08:49:55.426528] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:19.096 [2024-10-05 08:49:55.426564] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:19.096 [2024-10-05 08:49:55.426575] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:19.096 [2024-10-05 08:49:55.428678] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:19.096 [2024-10-05 08:49:55.428719] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:19.096 [2024-10-05 08:49:55.428807] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:19.096 [2024-10-05 08:49:55.428864] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:19.096 [2024-10-05 08:49:55.429044] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:19.096 spare 00:13:19.096 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.096 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:19.096 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.096 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.096 [2024-10-05 08:49:55.528965] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:19.096 [2024-10-05 08:49:55.529004] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:19.096 [2024-10-05 08:49:55.529297] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:13:19.096 [2024-10-05 08:49:55.529477] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:19.096 [2024-10-05 08:49:55.529491] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:19.096 [2024-10-05 08:49:55.529664] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:19.096 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.096 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:19.096 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:19.096 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:19.096 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.096 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:19.096 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:19.096 08:49:55 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.096 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.096 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.096 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.096 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.096 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.096 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.096 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.096 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.355 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.355 "name": "raid_bdev1", 00:13:19.355 "uuid": "024934f0-63f7-4950-aacb-e6b915aac895", 00:13:19.355 "strip_size_kb": 0, 00:13:19.355 "state": "online", 00:13:19.355 "raid_level": "raid1", 00:13:19.355 "superblock": true, 00:13:19.355 "num_base_bdevs": 2, 00:13:19.355 "num_base_bdevs_discovered": 2, 00:13:19.355 "num_base_bdevs_operational": 2, 00:13:19.355 "base_bdevs_list": [ 00:13:19.355 { 00:13:19.355 "name": "spare", 00:13:19.355 "uuid": "289f84a3-16f6-5019-bc19-bcc7b0b0f9b7", 00:13:19.355 "is_configured": true, 00:13:19.355 "data_offset": 2048, 00:13:19.355 "data_size": 63488 00:13:19.355 }, 00:13:19.355 { 00:13:19.355 "name": "BaseBdev2", 00:13:19.355 "uuid": "8a0a9bb2-c7c3-5a0d-af8e-10bb52d71479", 00:13:19.355 "is_configured": true, 00:13:19.355 "data_offset": 2048, 00:13:19.355 "data_size": 63488 00:13:19.355 } 00:13:19.355 ] 00:13:19.355 }' 00:13:19.355 08:49:55 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.355 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.614 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:19.614 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.614 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:19.614 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:19.614 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.614 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.614 08:49:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.614 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.614 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.614 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.614 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.614 "name": "raid_bdev1", 00:13:19.614 "uuid": "024934f0-63f7-4950-aacb-e6b915aac895", 00:13:19.614 "strip_size_kb": 0, 00:13:19.614 "state": "online", 00:13:19.614 "raid_level": "raid1", 00:13:19.614 "superblock": true, 00:13:19.614 "num_base_bdevs": 2, 00:13:19.614 "num_base_bdevs_discovered": 2, 00:13:19.614 "num_base_bdevs_operational": 2, 00:13:19.614 "base_bdevs_list": [ 00:13:19.614 { 00:13:19.614 "name": "spare", 00:13:19.614 "uuid": "289f84a3-16f6-5019-bc19-bcc7b0b0f9b7", 00:13:19.614 "is_configured": true, 00:13:19.614 "data_offset": 2048, 00:13:19.614 
"data_size": 63488 00:13:19.614 }, 00:13:19.614 { 00:13:19.614 "name": "BaseBdev2", 00:13:19.614 "uuid": "8a0a9bb2-c7c3-5a0d-af8e-10bb52d71479", 00:13:19.614 "is_configured": true, 00:13:19.614 "data_offset": 2048, 00:13:19.614 "data_size": 63488 00:13:19.614 } 00:13:19.614 ] 00:13:19.614 }' 00:13:19.614 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.875 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:19.875 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.875 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:19.875 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.875 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.875 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:19.875 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.875 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.875 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:19.875 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:19.875 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.875 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.875 [2024-10-05 08:49:56.201285] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:19.875 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.875 08:49:56 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:19.875 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:19.875 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:19.875 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.875 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:19.875 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:19.875 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.875 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.875 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.875 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.875 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.875 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.875 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.875 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.875 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.875 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.875 "name": "raid_bdev1", 00:13:19.875 "uuid": "024934f0-63f7-4950-aacb-e6b915aac895", 00:13:19.875 "strip_size_kb": 0, 00:13:19.875 "state": "online", 00:13:19.875 "raid_level": "raid1", 00:13:19.875 
"superblock": true, 00:13:19.875 "num_base_bdevs": 2, 00:13:19.875 "num_base_bdevs_discovered": 1, 00:13:19.875 "num_base_bdevs_operational": 1, 00:13:19.875 "base_bdevs_list": [ 00:13:19.875 { 00:13:19.875 "name": null, 00:13:19.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.875 "is_configured": false, 00:13:19.875 "data_offset": 0, 00:13:19.875 "data_size": 63488 00:13:19.875 }, 00:13:19.875 { 00:13:19.875 "name": "BaseBdev2", 00:13:19.875 "uuid": "8a0a9bb2-c7c3-5a0d-af8e-10bb52d71479", 00:13:19.875 "is_configured": true, 00:13:19.875 "data_offset": 2048, 00:13:19.875 "data_size": 63488 00:13:19.875 } 00:13:19.875 ] 00:13:19.875 }' 00:13:19.875 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.875 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.539 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:20.539 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.539 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.539 [2024-10-05 08:49:56.664839] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:20.539 [2024-10-05 08:49:56.665168] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:20.539 [2024-10-05 08:49:56.665187] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:20.539 [2024-10-05 08:49:56.665237] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:20.539 [2024-10-05 08:49:56.680139] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:13:20.539 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.539 08:49:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:20.539 [2024-10-05 08:49:56.682090] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:21.479 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:21.479 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.479 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:21.479 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:21.479 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.479 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.479 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.479 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.479 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.479 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.479 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.479 "name": "raid_bdev1", 00:13:21.479 "uuid": "024934f0-63f7-4950-aacb-e6b915aac895", 00:13:21.479 "strip_size_kb": 0, 00:13:21.479 "state": "online", 
00:13:21.479 "raid_level": "raid1", 00:13:21.479 "superblock": true, 00:13:21.479 "num_base_bdevs": 2, 00:13:21.479 "num_base_bdevs_discovered": 2, 00:13:21.479 "num_base_bdevs_operational": 2, 00:13:21.479 "process": { 00:13:21.479 "type": "rebuild", 00:13:21.479 "target": "spare", 00:13:21.479 "progress": { 00:13:21.479 "blocks": 20480, 00:13:21.479 "percent": 32 00:13:21.479 } 00:13:21.479 }, 00:13:21.479 "base_bdevs_list": [ 00:13:21.479 { 00:13:21.479 "name": "spare", 00:13:21.479 "uuid": "289f84a3-16f6-5019-bc19-bcc7b0b0f9b7", 00:13:21.479 "is_configured": true, 00:13:21.479 "data_offset": 2048, 00:13:21.479 "data_size": 63488 00:13:21.479 }, 00:13:21.479 { 00:13:21.479 "name": "BaseBdev2", 00:13:21.479 "uuid": "8a0a9bb2-c7c3-5a0d-af8e-10bb52d71479", 00:13:21.479 "is_configured": true, 00:13:21.479 "data_offset": 2048, 00:13:21.479 "data_size": 63488 00:13:21.479 } 00:13:21.479 ] 00:13:21.479 }' 00:13:21.479 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.479 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:21.479 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.479 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:21.479 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:21.479 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.479 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.479 [2024-10-05 08:49:57.849753] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:21.479 [2024-10-05 08:49:57.887280] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:21.479 [2024-10-05 
08:49:57.887348] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:21.479 [2024-10-05 08:49:57.887366] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:21.479 [2024-10-05 08:49:57.887374] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:21.479 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.479 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:21.479 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:21.479 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:21.479 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:21.479 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:21.479 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:21.479 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.479 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.479 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.479 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.479 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.479 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.479 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.479 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:21.479 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.739 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.739 "name": "raid_bdev1", 00:13:21.739 "uuid": "024934f0-63f7-4950-aacb-e6b915aac895", 00:13:21.739 "strip_size_kb": 0, 00:13:21.739 "state": "online", 00:13:21.739 "raid_level": "raid1", 00:13:21.739 "superblock": true, 00:13:21.739 "num_base_bdevs": 2, 00:13:21.739 "num_base_bdevs_discovered": 1, 00:13:21.739 "num_base_bdevs_operational": 1, 00:13:21.739 "base_bdevs_list": [ 00:13:21.739 { 00:13:21.739 "name": null, 00:13:21.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.739 "is_configured": false, 00:13:21.739 "data_offset": 0, 00:13:21.739 "data_size": 63488 00:13:21.739 }, 00:13:21.739 { 00:13:21.739 "name": "BaseBdev2", 00:13:21.739 "uuid": "8a0a9bb2-c7c3-5a0d-af8e-10bb52d71479", 00:13:21.739 "is_configured": true, 00:13:21.739 "data_offset": 2048, 00:13:21.739 "data_size": 63488 00:13:21.739 } 00:13:21.739 ] 00:13:21.739 }' 00:13:21.739 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.739 08:49:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.005 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:22.005 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.005 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.005 [2024-10-05 08:49:58.366156] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:22.005 [2024-10-05 08:49:58.366305] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.005 [2024-10-05 08:49:58.366346] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:13:22.005 [2024-10-05 08:49:58.366374] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.005 [2024-10-05 08:49:58.366860] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.005 [2024-10-05 08:49:58.366919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:22.005 [2024-10-05 08:49:58.367056] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:22.005 [2024-10-05 08:49:58.367097] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:22.005 [2024-10-05 08:49:58.367155] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:22.005 [2024-10-05 08:49:58.367206] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:22.005 [2024-10-05 08:49:58.382332] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:13:22.005 spare 00:13:22.005 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.005 [2024-10-05 08:49:58.384114] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:22.005 08:49:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:22.944 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:22.944 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.944 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:22.944 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:22.944 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.944 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.944 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.944 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.944 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.204 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.204 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.204 "name": "raid_bdev1", 00:13:23.204 "uuid": "024934f0-63f7-4950-aacb-e6b915aac895", 00:13:23.204 "strip_size_kb": 0, 00:13:23.204 "state": "online", 00:13:23.204 "raid_level": "raid1", 00:13:23.204 "superblock": true, 00:13:23.204 "num_base_bdevs": 2, 00:13:23.204 "num_base_bdevs_discovered": 2, 00:13:23.204 "num_base_bdevs_operational": 2, 00:13:23.204 "process": { 00:13:23.204 "type": "rebuild", 00:13:23.204 "target": "spare", 00:13:23.204 "progress": { 00:13:23.204 "blocks": 20480, 00:13:23.204 "percent": 32 00:13:23.204 } 00:13:23.204 }, 00:13:23.204 "base_bdevs_list": [ 00:13:23.204 { 00:13:23.204 "name": "spare", 00:13:23.204 "uuid": "289f84a3-16f6-5019-bc19-bcc7b0b0f9b7", 00:13:23.204 "is_configured": true, 00:13:23.204 "data_offset": 2048, 00:13:23.204 "data_size": 63488 00:13:23.204 }, 00:13:23.204 { 00:13:23.204 "name": "BaseBdev2", 00:13:23.204 "uuid": "8a0a9bb2-c7c3-5a0d-af8e-10bb52d71479", 00:13:23.204 "is_configured": true, 00:13:23.204 "data_offset": 2048, 00:13:23.204 "data_size": 63488 00:13:23.204 } 00:13:23.204 ] 00:13:23.204 }' 00:13:23.204 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.204 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:13:23.204 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.204 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:23.204 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:23.204 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.204 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.204 [2024-10-05 08:49:59.532716] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:23.204 [2024-10-05 08:49:59.589091] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:23.204 [2024-10-05 08:49:59.589154] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.204 [2024-10-05 08:49:59.589169] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:23.204 [2024-10-05 08:49:59.589178] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:23.204 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.204 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:23.204 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:23.204 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.204 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.205 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.205 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:13:23.205 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.205 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.205 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.205 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.205 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.205 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.205 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.205 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.205 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.205 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.205 "name": "raid_bdev1", 00:13:23.205 "uuid": "024934f0-63f7-4950-aacb-e6b915aac895", 00:13:23.205 "strip_size_kb": 0, 00:13:23.205 "state": "online", 00:13:23.205 "raid_level": "raid1", 00:13:23.205 "superblock": true, 00:13:23.205 "num_base_bdevs": 2, 00:13:23.205 "num_base_bdevs_discovered": 1, 00:13:23.205 "num_base_bdevs_operational": 1, 00:13:23.205 "base_bdevs_list": [ 00:13:23.205 { 00:13:23.205 "name": null, 00:13:23.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.205 "is_configured": false, 00:13:23.205 "data_offset": 0, 00:13:23.205 "data_size": 63488 00:13:23.205 }, 00:13:23.205 { 00:13:23.205 "name": "BaseBdev2", 00:13:23.205 "uuid": "8a0a9bb2-c7c3-5a0d-af8e-10bb52d71479", 00:13:23.205 "is_configured": true, 00:13:23.205 "data_offset": 2048, 00:13:23.205 "data_size": 63488 00:13:23.205 } 00:13:23.205 ] 00:13:23.205 }' 
00:13:23.205 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.205 08:49:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.774 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:23.774 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.774 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:23.774 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:23.774 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.774 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.774 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.774 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.774 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.774 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.774 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.774 "name": "raid_bdev1", 00:13:23.774 "uuid": "024934f0-63f7-4950-aacb-e6b915aac895", 00:13:23.774 "strip_size_kb": 0, 00:13:23.775 "state": "online", 00:13:23.775 "raid_level": "raid1", 00:13:23.775 "superblock": true, 00:13:23.775 "num_base_bdevs": 2, 00:13:23.775 "num_base_bdevs_discovered": 1, 00:13:23.775 "num_base_bdevs_operational": 1, 00:13:23.775 "base_bdevs_list": [ 00:13:23.775 { 00:13:23.775 "name": null, 00:13:23.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.775 "is_configured": false, 00:13:23.775 "data_offset": 0, 
00:13:23.775 "data_size": 63488 00:13:23.775 }, 00:13:23.775 { 00:13:23.775 "name": "BaseBdev2", 00:13:23.775 "uuid": "8a0a9bb2-c7c3-5a0d-af8e-10bb52d71479", 00:13:23.775 "is_configured": true, 00:13:23.775 "data_offset": 2048, 00:13:23.775 "data_size": 63488 00:13:23.775 } 00:13:23.775 ] 00:13:23.775 }' 00:13:23.775 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.775 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:23.775 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.775 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:23.775 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:23.775 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.775 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.775 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.775 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:23.775 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.775 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.775 [2024-10-05 08:50:00.218418] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:23.775 [2024-10-05 08:50:00.218472] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:23.775 [2024-10-05 08:50:00.218492] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:23.775 [2024-10-05 08:50:00.218504] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:23.775 [2024-10-05 08:50:00.218942] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:23.775 [2024-10-05 08:50:00.218979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:23.775 [2024-10-05 08:50:00.219057] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:23.775 [2024-10-05 08:50:00.219073] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:23.775 [2024-10-05 08:50:00.219080] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:23.775 [2024-10-05 08:50:00.219091] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:23.775 BaseBdev1 00:13:23.775 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.775 08:50:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:25.154 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:25.154 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.154 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.154 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.154 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.154 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:25.154 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.154 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.154 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.154 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.154 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.154 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.154 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.154 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.154 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.154 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.154 "name": "raid_bdev1", 00:13:25.154 "uuid": "024934f0-63f7-4950-aacb-e6b915aac895", 00:13:25.154 "strip_size_kb": 0, 00:13:25.154 "state": "online", 00:13:25.154 "raid_level": "raid1", 00:13:25.154 "superblock": true, 00:13:25.154 "num_base_bdevs": 2, 00:13:25.154 "num_base_bdevs_discovered": 1, 00:13:25.154 "num_base_bdevs_operational": 1, 00:13:25.154 "base_bdevs_list": [ 00:13:25.154 { 00:13:25.154 "name": null, 00:13:25.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.154 "is_configured": false, 00:13:25.154 "data_offset": 0, 00:13:25.154 "data_size": 63488 00:13:25.154 }, 00:13:25.154 { 00:13:25.154 "name": "BaseBdev2", 00:13:25.154 "uuid": "8a0a9bb2-c7c3-5a0d-af8e-10bb52d71479", 00:13:25.154 "is_configured": true, 00:13:25.154 "data_offset": 2048, 00:13:25.154 "data_size": 63488 00:13:25.154 } 00:13:25.154 ] 00:13:25.154 }' 00:13:25.154 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.154 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:13:25.417 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:25.417 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.417 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:25.417 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:25.417 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.417 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.417 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.417 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.417 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.417 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.417 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.417 "name": "raid_bdev1", 00:13:25.417 "uuid": "024934f0-63f7-4950-aacb-e6b915aac895", 00:13:25.417 "strip_size_kb": 0, 00:13:25.417 "state": "online", 00:13:25.417 "raid_level": "raid1", 00:13:25.417 "superblock": true, 00:13:25.417 "num_base_bdevs": 2, 00:13:25.417 "num_base_bdevs_discovered": 1, 00:13:25.417 "num_base_bdevs_operational": 1, 00:13:25.417 "base_bdevs_list": [ 00:13:25.417 { 00:13:25.417 "name": null, 00:13:25.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.417 "is_configured": false, 00:13:25.417 "data_offset": 0, 00:13:25.417 "data_size": 63488 00:13:25.417 }, 00:13:25.417 { 00:13:25.417 "name": "BaseBdev2", 00:13:25.417 "uuid": "8a0a9bb2-c7c3-5a0d-af8e-10bb52d71479", 00:13:25.417 "is_configured": true, 
00:13:25.417 "data_offset": 2048, 00:13:25.417 "data_size": 63488 00:13:25.417 } 00:13:25.417 ] 00:13:25.417 }' 00:13:25.417 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.417 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:25.417 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.417 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:25.417 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:25.417 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:13:25.417 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:25.417 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:25.417 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:25.417 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:25.417 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:25.417 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:25.417 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.417 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.417 [2024-10-05 08:50:01.836000] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:25.417 [2024-10-05 08:50:01.836215] 
bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:25.417 [2024-10-05 08:50:01.836270] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:25.417 request: 00:13:25.417 { 00:13:25.417 "base_bdev": "BaseBdev1", 00:13:25.417 "raid_bdev": "raid_bdev1", 00:13:25.417 "method": "bdev_raid_add_base_bdev", 00:13:25.417 "req_id": 1 00:13:25.417 } 00:13:25.417 Got JSON-RPC error response 00:13:25.417 response: 00:13:25.417 { 00:13:25.417 "code": -22, 00:13:25.417 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:25.417 } 00:13:25.417 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:25.417 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:13:25.417 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:25.417 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:25.417 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:25.417 08:50:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:26.797 08:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:26.797 08:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:26.797 08:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:26.797 08:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.797 08:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.797 08:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:13:26.797 08:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.797 08:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.797 08:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.797 08:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.797 08:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.797 08:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.797 08:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.797 08:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.797 08:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.797 08:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.797 "name": "raid_bdev1", 00:13:26.797 "uuid": "024934f0-63f7-4950-aacb-e6b915aac895", 00:13:26.797 "strip_size_kb": 0, 00:13:26.797 "state": "online", 00:13:26.797 "raid_level": "raid1", 00:13:26.797 "superblock": true, 00:13:26.797 "num_base_bdevs": 2, 00:13:26.798 "num_base_bdevs_discovered": 1, 00:13:26.798 "num_base_bdevs_operational": 1, 00:13:26.798 "base_bdevs_list": [ 00:13:26.798 { 00:13:26.798 "name": null, 00:13:26.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.798 "is_configured": false, 00:13:26.798 "data_offset": 0, 00:13:26.798 "data_size": 63488 00:13:26.798 }, 00:13:26.798 { 00:13:26.798 "name": "BaseBdev2", 00:13:26.798 "uuid": "8a0a9bb2-c7c3-5a0d-af8e-10bb52d71479", 00:13:26.798 "is_configured": true, 00:13:26.798 "data_offset": 2048, 00:13:26.798 "data_size": 63488 00:13:26.798 } 00:13:26.798 ] 00:13:26.798 }' 
00:13:26.798 08:50:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.798 08:50:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.058 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:27.058 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.058 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:27.058 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:27.058 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.058 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.058 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.058 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.058 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.058 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.058 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.058 "name": "raid_bdev1", 00:13:27.058 "uuid": "024934f0-63f7-4950-aacb-e6b915aac895", 00:13:27.058 "strip_size_kb": 0, 00:13:27.058 "state": "online", 00:13:27.058 "raid_level": "raid1", 00:13:27.058 "superblock": true, 00:13:27.058 "num_base_bdevs": 2, 00:13:27.058 "num_base_bdevs_discovered": 1, 00:13:27.058 "num_base_bdevs_operational": 1, 00:13:27.058 "base_bdevs_list": [ 00:13:27.058 { 00:13:27.058 "name": null, 00:13:27.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.058 "is_configured": false, 00:13:27.058 "data_offset": 0, 
00:13:27.058 "data_size": 63488 00:13:27.058 }, 00:13:27.058 { 00:13:27.058 "name": "BaseBdev2", 00:13:27.058 "uuid": "8a0a9bb2-c7c3-5a0d-af8e-10bb52d71479", 00:13:27.058 "is_configured": true, 00:13:27.058 "data_offset": 2048, 00:13:27.058 "data_size": 63488 00:13:27.058 } 00:13:27.058 ] 00:13:27.058 }' 00:13:27.058 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.058 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:27.058 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.058 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:27.058 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 74682 00:13:27.058 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 74682 ']' 00:13:27.058 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 74682 00:13:27.058 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:13:27.058 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:27.058 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74682 00:13:27.058 killing process with pid 74682 00:13:27.058 Received shutdown signal, test time was about 17.159596 seconds 00:13:27.058 00:13:27.058 Latency(us) 00:13:27.058 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:27.058 =================================================================================================================== 00:13:27.058 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:27.058 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:27.058 08:50:03 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:27.058 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74682' 00:13:27.058 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 74682 00:13:27.058 [2024-10-05 08:50:03.475314] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:27.058 [2024-10-05 08:50:03.475430] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:27.058 08:50:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 74682 00:13:27.058 [2024-10-05 08:50:03.475482] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:27.058 [2024-10-05 08:50:03.475493] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:27.318 [2024-10-05 08:50:03.695438] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:28.700 08:50:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:28.700 00:13:28.700 real 0m20.350s 00:13:28.700 user 0m26.572s 00:13:28.700 sys 0m2.341s 00:13:28.700 ************************************ 00:13:28.700 END TEST raid_rebuild_test_sb_io 00:13:28.700 ************************************ 00:13:28.700 08:50:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:28.700 08:50:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.700 08:50:04 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:28.700 08:50:04 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:13:28.700 08:50:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:28.700 08:50:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:13:28.700 08:50:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:28.700 ************************************ 00:13:28.700 START TEST raid_rebuild_test 00:13:28.700 ************************************ 00:13:28.700 08:50:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:13:28.700 08:50:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:28.700 08:50:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:28.700 08:50:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:28.700 08:50:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:28.700 08:50:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:28.700 08:50:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:28.700 08:50:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:28.700 08:50:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:28.700 08:50:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:28.700 08:50:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:28.700 08:50:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:28.700 08:50:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:28.700 08:50:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:28.700 08:50:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:28.700 08:50:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:28.700 08:50:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:28.700 08:50:05 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:28.700 08:50:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:28.700 08:50:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:28.700 08:50:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:28.700 08:50:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:28.700 08:50:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:28.700 08:50:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:28.700 08:50:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:28.700 08:50:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:28.700 08:50:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:28.700 08:50:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:28.700 08:50:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:28.700 08:50:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:28.700 08:50:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75256 00:13:28.700 08:50:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:28.700 08:50:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75256 00:13:28.700 08:50:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 75256 ']' 00:13:28.700 08:50:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.700 08:50:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:13:28.700 08:50:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.700 08:50:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:28.700 08:50:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.700 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:28.700 Zero copy mechanism will not be used. 00:13:28.700 [2024-10-05 08:50:05.110916] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:13:28.700 [2024-10-05 08:50:05.111171] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75256 ] 00:13:28.959 [2024-10-05 08:50:05.279925] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.221 [2024-10-05 08:50:05.476313] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.221 [2024-10-05 08:50:05.664963] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:29.221 [2024-10-05 08:50:05.665091] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:29.481 08:50:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:29.481 08:50:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:13:29.481 08:50:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:29.481 08:50:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:29.481 08:50:05 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.481 08:50:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.741 BaseBdev1_malloc 00:13:29.741 08:50:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.741 08:50:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:29.741 08:50:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.741 08:50:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.741 [2024-10-05 08:50:05.972925] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:29.741 [2024-10-05 08:50:05.973023] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.741 [2024-10-05 08:50:05.973048] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:29.741 [2024-10-05 08:50:05.973062] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.741 [2024-10-05 08:50:05.975091] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.741 [2024-10-05 08:50:05.975126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:29.741 BaseBdev1 00:13:29.741 08:50:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.741 08:50:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:29.741 08:50:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:29.741 08:50:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.741 08:50:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.741 BaseBdev2_malloc 00:13:29.741 08:50:06 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.741 08:50:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:29.741 08:50:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.741 08:50:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.741 [2024-10-05 08:50:06.054492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:29.741 [2024-10-05 08:50:06.054632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.741 [2024-10-05 08:50:06.054671] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:29.741 [2024-10-05 08:50:06.054684] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.741 [2024-10-05 08:50:06.056640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.741 [2024-10-05 08:50:06.056677] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:29.741 BaseBdev2 00:13:29.741 08:50:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.741 08:50:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:29.741 08:50:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:29.741 08:50:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.741 08:50:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.741 BaseBdev3_malloc 00:13:29.741 08:50:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.741 08:50:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:29.741 08:50:06 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.741 08:50:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.741 [2024-10-05 08:50:06.107460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:29.741 [2024-10-05 08:50:06.107515] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.741 [2024-10-05 08:50:06.107551] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:29.741 [2024-10-05 08:50:06.107561] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.741 [2024-10-05 08:50:06.109510] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.741 [2024-10-05 08:50:06.109560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:29.741 BaseBdev3 00:13:29.741 08:50:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.741 08:50:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:29.741 08:50:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:29.741 08:50:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.741 08:50:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.741 BaseBdev4_malloc 00:13:29.741 08:50:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.741 08:50:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:29.741 08:50:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.741 08:50:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.741 [2024-10-05 08:50:06.160503] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:29.741 [2024-10-05 08:50:06.160555] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.741 [2024-10-05 08:50:06.160588] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:29.741 [2024-10-05 08:50:06.160598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.741 [2024-10-05 08:50:06.162587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.741 [2024-10-05 08:50:06.162627] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:29.741 BaseBdev4 00:13:29.741 08:50:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.741 08:50:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:29.741 08:50:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.741 08:50:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.741 spare_malloc 00:13:29.741 08:50:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.741 08:50:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:29.741 08:50:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.741 08:50:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.001 spare_delay 00:13:30.001 08:50:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.001 08:50:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:30.001 08:50:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:30.001 08:50:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.001 [2024-10-05 08:50:06.226190] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:30.001 [2024-10-05 08:50:06.226317] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.001 [2024-10-05 08:50:06.226339] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:30.001 [2024-10-05 08:50:06.226350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.001 [2024-10-05 08:50:06.228318] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.001 [2024-10-05 08:50:06.228355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:30.001 spare 00:13:30.001 08:50:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.001 08:50:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:30.001 08:50:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.001 08:50:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.001 [2024-10-05 08:50:06.238227] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:30.001 [2024-10-05 08:50:06.239836] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:30.001 [2024-10-05 08:50:06.239902] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:30.001 [2024-10-05 08:50:06.239949] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:30.001 [2024-10-05 08:50:06.240028] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:30.001 [2024-10-05 
08:50:06.240038] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:30.001 [2024-10-05 08:50:06.240271] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:30.001 [2024-10-05 08:50:06.240416] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:30.001 [2024-10-05 08:50:06.240426] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:30.001 [2024-10-05 08:50:06.240564] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:30.001 08:50:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.001 08:50:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:30.001 08:50:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:30.001 08:50:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.001 08:50:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:30.001 08:50:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:30.001 08:50:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:30.001 08:50:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.001 08:50:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.001 08:50:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.001 08:50:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.001 08:50:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.001 08:50:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:13:30.001 08:50:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.001 08:50:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.001 08:50:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.001 08:50:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.001 "name": "raid_bdev1", 00:13:30.001 "uuid": "be726d4a-e786-4914-b1f9-db3ff528c6a7", 00:13:30.001 "strip_size_kb": 0, 00:13:30.001 "state": "online", 00:13:30.001 "raid_level": "raid1", 00:13:30.001 "superblock": false, 00:13:30.001 "num_base_bdevs": 4, 00:13:30.001 "num_base_bdevs_discovered": 4, 00:13:30.001 "num_base_bdevs_operational": 4, 00:13:30.001 "base_bdevs_list": [ 00:13:30.001 { 00:13:30.001 "name": "BaseBdev1", 00:13:30.001 "uuid": "0485c8ed-e09a-50c1-95ea-9a5241aacad5", 00:13:30.001 "is_configured": true, 00:13:30.001 "data_offset": 0, 00:13:30.001 "data_size": 65536 00:13:30.001 }, 00:13:30.001 { 00:13:30.001 "name": "BaseBdev2", 00:13:30.001 "uuid": "61ad4172-84d0-5e1d-9108-5e376f389fb4", 00:13:30.001 "is_configured": true, 00:13:30.001 "data_offset": 0, 00:13:30.001 "data_size": 65536 00:13:30.001 }, 00:13:30.001 { 00:13:30.001 "name": "BaseBdev3", 00:13:30.001 "uuid": "c485c254-cf0f-586b-ac78-4cf3dd47dfca", 00:13:30.001 "is_configured": true, 00:13:30.001 "data_offset": 0, 00:13:30.001 "data_size": 65536 00:13:30.001 }, 00:13:30.001 { 00:13:30.001 "name": "BaseBdev4", 00:13:30.001 "uuid": "b39d2215-5241-599f-8a62-076b3ba0397b", 00:13:30.001 "is_configured": true, 00:13:30.001 "data_offset": 0, 00:13:30.001 "data_size": 65536 00:13:30.001 } 00:13:30.001 ] 00:13:30.001 }' 00:13:30.001 08:50:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.001 08:50:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.261 08:50:06 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:30.261 08:50:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:30.261 08:50:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.261 08:50:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.261 [2024-10-05 08:50:06.693688] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:30.261 08:50:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.521 08:50:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:30.521 08:50:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.521 08:50:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:30.521 08:50:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.521 08:50:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.521 08:50:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.521 08:50:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:30.521 08:50:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:30.521 08:50:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:30.521 08:50:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:30.521 08:50:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:30.521 08:50:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:30.521 08:50:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:30.521 08:50:06 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:30.521 08:50:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:30.521 08:50:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:30.521 08:50:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:30.521 08:50:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:30.521 08:50:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:30.521 08:50:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:30.521 [2024-10-05 08:50:06.965031] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:30.521 /dev/nbd0 00:13:30.780 08:50:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:30.780 08:50:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:30.780 08:50:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:30.780 08:50:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:30.780 08:50:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:30.780 08:50:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:30.780 08:50:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:30.780 08:50:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:30.780 08:50:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:30.780 08:50:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:30.780 08:50:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:30.780 1+0 records in 00:13:30.780 1+0 records out 00:13:30.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00055968 s, 7.3 MB/s 00:13:30.780 08:50:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.780 08:50:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:30.780 08:50:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.780 08:50:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:30.780 08:50:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:30.780 08:50:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:30.780 08:50:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:30.780 08:50:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:30.781 08:50:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:30.781 08:50:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:13:37.365 65536+0 records in 00:13:37.365 65536+0 records out 00:13:37.365 33554432 bytes (34 MB, 32 MiB) copied, 5.79437 s, 5.8 MB/s 00:13:37.365 08:50:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:37.365 08:50:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:37.365 08:50:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:37.365 08:50:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:37.365 08:50:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 
00:13:37.365 08:50:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:37.365 08:50:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:37.365 [2024-10-05 08:50:13.033788] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:37.365 08:50:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:37.365 08:50:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:37.365 08:50:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:37.365 08:50:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:37.365 08:50:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:37.365 08:50:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:37.365 08:50:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:37.365 08:50:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:37.365 08:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:37.365 08:50:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.365 08:50:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.365 [2024-10-05 08:50:13.061841] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:37.365 08:50:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.365 08:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:37.365 08:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.365 08:50:13 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.365 08:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.365 08:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.365 08:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:37.365 08:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.365 08:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.365 08:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.365 08:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.365 08:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.365 08:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.365 08:50:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.365 08:50:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.365 08:50:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.365 08:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.365 "name": "raid_bdev1", 00:13:37.365 "uuid": "be726d4a-e786-4914-b1f9-db3ff528c6a7", 00:13:37.365 "strip_size_kb": 0, 00:13:37.365 "state": "online", 00:13:37.365 "raid_level": "raid1", 00:13:37.365 "superblock": false, 00:13:37.365 "num_base_bdevs": 4, 00:13:37.365 "num_base_bdevs_discovered": 3, 00:13:37.365 "num_base_bdevs_operational": 3, 00:13:37.365 "base_bdevs_list": [ 00:13:37.365 { 00:13:37.365 "name": null, 00:13:37.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.365 "is_configured": false, 00:13:37.365 "data_offset": 0, 00:13:37.365 "data_size": 
65536 00:13:37.365 }, 00:13:37.365 { 00:13:37.365 "name": "BaseBdev2", 00:13:37.365 "uuid": "61ad4172-84d0-5e1d-9108-5e376f389fb4", 00:13:37.365 "is_configured": true, 00:13:37.365 "data_offset": 0, 00:13:37.365 "data_size": 65536 00:13:37.365 }, 00:13:37.365 { 00:13:37.365 "name": "BaseBdev3", 00:13:37.365 "uuid": "c485c254-cf0f-586b-ac78-4cf3dd47dfca", 00:13:37.365 "is_configured": true, 00:13:37.365 "data_offset": 0, 00:13:37.365 "data_size": 65536 00:13:37.365 }, 00:13:37.365 { 00:13:37.365 "name": "BaseBdev4", 00:13:37.365 "uuid": "b39d2215-5241-599f-8a62-076b3ba0397b", 00:13:37.365 "is_configured": true, 00:13:37.365 "data_offset": 0, 00:13:37.365 "data_size": 65536 00:13:37.365 } 00:13:37.365 ] 00:13:37.365 }' 00:13:37.365 08:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.365 08:50:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.365 08:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:37.365 08:50:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.365 08:50:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.365 [2024-10-05 08:50:13.473082] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:37.365 [2024-10-05 08:50:13.485945] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:13:37.365 08:50:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.365 08:50:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:37.365 [2024-10-05 08:50:13.487733] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:38.307 08:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:38.307 08:50:14 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.307 08:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:38.307 08:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:38.307 08:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.307 08:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.307 08:50:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.307 08:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.307 08:50:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.307 08:50:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.307 08:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:38.307 "name": "raid_bdev1", 00:13:38.307 "uuid": "be726d4a-e786-4914-b1f9-db3ff528c6a7", 00:13:38.307 "strip_size_kb": 0, 00:13:38.307 "state": "online", 00:13:38.307 "raid_level": "raid1", 00:13:38.307 "superblock": false, 00:13:38.307 "num_base_bdevs": 4, 00:13:38.307 "num_base_bdevs_discovered": 4, 00:13:38.307 "num_base_bdevs_operational": 4, 00:13:38.307 "process": { 00:13:38.307 "type": "rebuild", 00:13:38.307 "target": "spare", 00:13:38.307 "progress": { 00:13:38.307 "blocks": 20480, 00:13:38.307 "percent": 31 00:13:38.307 } 00:13:38.307 }, 00:13:38.307 "base_bdevs_list": [ 00:13:38.307 { 00:13:38.307 "name": "spare", 00:13:38.307 "uuid": "b9e69b22-d3cd-50d3-86b4-1e058cd1aa79", 00:13:38.307 "is_configured": true, 00:13:38.307 "data_offset": 0, 00:13:38.307 "data_size": 65536 00:13:38.307 }, 00:13:38.307 { 00:13:38.307 "name": "BaseBdev2", 00:13:38.307 "uuid": "61ad4172-84d0-5e1d-9108-5e376f389fb4", 00:13:38.307 "is_configured": true, 00:13:38.307 "data_offset": 0, 
00:13:38.307 "data_size": 65536 00:13:38.307 }, 00:13:38.307 { 00:13:38.307 "name": "BaseBdev3", 00:13:38.307 "uuid": "c485c254-cf0f-586b-ac78-4cf3dd47dfca", 00:13:38.307 "is_configured": true, 00:13:38.307 "data_offset": 0, 00:13:38.307 "data_size": 65536 00:13:38.307 }, 00:13:38.307 { 00:13:38.307 "name": "BaseBdev4", 00:13:38.307 "uuid": "b39d2215-5241-599f-8a62-076b3ba0397b", 00:13:38.307 "is_configured": true, 00:13:38.307 "data_offset": 0, 00:13:38.307 "data_size": 65536 00:13:38.307 } 00:13:38.307 ] 00:13:38.307 }' 00:13:38.307 08:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:38.307 08:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:38.307 08:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.307 08:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:38.307 08:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:38.307 08:50:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.307 08:50:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.307 [2024-10-05 08:50:14.644168] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:38.307 [2024-10-05 08:50:14.692603] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:38.307 [2024-10-05 08:50:14.692662] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.307 [2024-10-05 08:50:14.692678] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:38.307 [2024-10-05 08:50:14.692687] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:38.307 08:50:14 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.307 08:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:38.307 08:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:38.307 08:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:38.307 08:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.307 08:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.308 08:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:38.308 08:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.308 08:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.308 08:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.308 08:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.308 08:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.308 08:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.308 08:50:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.308 08:50:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.308 08:50:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.308 08:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.308 "name": "raid_bdev1", 00:13:38.308 "uuid": "be726d4a-e786-4914-b1f9-db3ff528c6a7", 00:13:38.308 "strip_size_kb": 0, 00:13:38.308 "state": "online", 00:13:38.308 "raid_level": "raid1", 00:13:38.308 "superblock": false, 00:13:38.308 
"num_base_bdevs": 4, 00:13:38.308 "num_base_bdevs_discovered": 3, 00:13:38.308 "num_base_bdevs_operational": 3, 00:13:38.308 "base_bdevs_list": [ 00:13:38.308 { 00:13:38.308 "name": null, 00:13:38.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.308 "is_configured": false, 00:13:38.308 "data_offset": 0, 00:13:38.308 "data_size": 65536 00:13:38.308 }, 00:13:38.308 { 00:13:38.308 "name": "BaseBdev2", 00:13:38.308 "uuid": "61ad4172-84d0-5e1d-9108-5e376f389fb4", 00:13:38.308 "is_configured": true, 00:13:38.308 "data_offset": 0, 00:13:38.308 "data_size": 65536 00:13:38.308 }, 00:13:38.308 { 00:13:38.308 "name": "BaseBdev3", 00:13:38.308 "uuid": "c485c254-cf0f-586b-ac78-4cf3dd47dfca", 00:13:38.308 "is_configured": true, 00:13:38.308 "data_offset": 0, 00:13:38.308 "data_size": 65536 00:13:38.308 }, 00:13:38.308 { 00:13:38.308 "name": "BaseBdev4", 00:13:38.308 "uuid": "b39d2215-5241-599f-8a62-076b3ba0397b", 00:13:38.308 "is_configured": true, 00:13:38.308 "data_offset": 0, 00:13:38.308 "data_size": 65536 00:13:38.308 } 00:13:38.308 ] 00:13:38.308 }' 00:13:38.308 08:50:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.308 08:50:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.878 08:50:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:38.878 08:50:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.878 08:50:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:38.878 08:50:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:38.878 08:50:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.878 08:50:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.878 08:50:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:13:38.878 08:50:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.878 08:50:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.878 08:50:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.878 08:50:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:38.878 "name": "raid_bdev1", 00:13:38.878 "uuid": "be726d4a-e786-4914-b1f9-db3ff528c6a7", 00:13:38.878 "strip_size_kb": 0, 00:13:38.878 "state": "online", 00:13:38.878 "raid_level": "raid1", 00:13:38.878 "superblock": false, 00:13:38.878 "num_base_bdevs": 4, 00:13:38.878 "num_base_bdevs_discovered": 3, 00:13:38.878 "num_base_bdevs_operational": 3, 00:13:38.878 "base_bdevs_list": [ 00:13:38.878 { 00:13:38.878 "name": null, 00:13:38.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.878 "is_configured": false, 00:13:38.878 "data_offset": 0, 00:13:38.878 "data_size": 65536 00:13:38.878 }, 00:13:38.878 { 00:13:38.878 "name": "BaseBdev2", 00:13:38.878 "uuid": "61ad4172-84d0-5e1d-9108-5e376f389fb4", 00:13:38.878 "is_configured": true, 00:13:38.878 "data_offset": 0, 00:13:38.878 "data_size": 65536 00:13:38.878 }, 00:13:38.878 { 00:13:38.878 "name": "BaseBdev3", 00:13:38.878 "uuid": "c485c254-cf0f-586b-ac78-4cf3dd47dfca", 00:13:38.878 "is_configured": true, 00:13:38.878 "data_offset": 0, 00:13:38.878 "data_size": 65536 00:13:38.878 }, 00:13:38.878 { 00:13:38.878 "name": "BaseBdev4", 00:13:38.878 "uuid": "b39d2215-5241-599f-8a62-076b3ba0397b", 00:13:38.878 "is_configured": true, 00:13:38.878 "data_offset": 0, 00:13:38.878 "data_size": 65536 00:13:38.878 } 00:13:38.878 ] 00:13:38.878 }' 00:13:38.878 08:50:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:38.878 08:50:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:38.878 08:50:15 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.138 08:50:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:39.138 08:50:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:39.138 08:50:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.138 08:50:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.138 [2024-10-05 08:50:15.363133] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:39.138 [2024-10-05 08:50:15.376638] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:13:39.138 08:50:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.138 08:50:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:39.138 [2024-10-05 08:50:15.378472] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:40.077 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:40.077 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.077 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:40.077 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:40.077 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.077 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.077 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.077 08:50:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.077 
08:50:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.077 08:50:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.077 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.077 "name": "raid_bdev1", 00:13:40.077 "uuid": "be726d4a-e786-4914-b1f9-db3ff528c6a7", 00:13:40.077 "strip_size_kb": 0, 00:13:40.077 "state": "online", 00:13:40.077 "raid_level": "raid1", 00:13:40.077 "superblock": false, 00:13:40.077 "num_base_bdevs": 4, 00:13:40.077 "num_base_bdevs_discovered": 4, 00:13:40.077 "num_base_bdevs_operational": 4, 00:13:40.077 "process": { 00:13:40.077 "type": "rebuild", 00:13:40.077 "target": "spare", 00:13:40.077 "progress": { 00:13:40.077 "blocks": 20480, 00:13:40.077 "percent": 31 00:13:40.077 } 00:13:40.077 }, 00:13:40.077 "base_bdevs_list": [ 00:13:40.077 { 00:13:40.077 "name": "spare", 00:13:40.077 "uuid": "b9e69b22-d3cd-50d3-86b4-1e058cd1aa79", 00:13:40.077 "is_configured": true, 00:13:40.077 "data_offset": 0, 00:13:40.077 "data_size": 65536 00:13:40.077 }, 00:13:40.077 { 00:13:40.077 "name": "BaseBdev2", 00:13:40.077 "uuid": "61ad4172-84d0-5e1d-9108-5e376f389fb4", 00:13:40.077 "is_configured": true, 00:13:40.077 "data_offset": 0, 00:13:40.077 "data_size": 65536 00:13:40.077 }, 00:13:40.078 { 00:13:40.078 "name": "BaseBdev3", 00:13:40.078 "uuid": "c485c254-cf0f-586b-ac78-4cf3dd47dfca", 00:13:40.078 "is_configured": true, 00:13:40.078 "data_offset": 0, 00:13:40.078 "data_size": 65536 00:13:40.078 }, 00:13:40.078 { 00:13:40.078 "name": "BaseBdev4", 00:13:40.078 "uuid": "b39d2215-5241-599f-8a62-076b3ba0397b", 00:13:40.078 "is_configured": true, 00:13:40.078 "data_offset": 0, 00:13:40.078 "data_size": 65536 00:13:40.078 } 00:13:40.078 ] 00:13:40.078 }' 00:13:40.078 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.078 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:13:40.078 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.078 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:40.078 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:40.078 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:40.078 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:40.078 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:40.078 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:40.078 08:50:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.078 08:50:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.078 [2024-10-05 08:50:16.546281] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:40.337 [2024-10-05 08:50:16.583200] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:13:40.337 08:50:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.337 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:40.337 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:40.337 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:40.337 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.337 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:40.337 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:40.337 
08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.337 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.337 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.337 08:50:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.337 08:50:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.337 08:50:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.337 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.338 "name": "raid_bdev1", 00:13:40.338 "uuid": "be726d4a-e786-4914-b1f9-db3ff528c6a7", 00:13:40.338 "strip_size_kb": 0, 00:13:40.338 "state": "online", 00:13:40.338 "raid_level": "raid1", 00:13:40.338 "superblock": false, 00:13:40.338 "num_base_bdevs": 4, 00:13:40.338 "num_base_bdevs_discovered": 3, 00:13:40.338 "num_base_bdevs_operational": 3, 00:13:40.338 "process": { 00:13:40.338 "type": "rebuild", 00:13:40.338 "target": "spare", 00:13:40.338 "progress": { 00:13:40.338 "blocks": 24576, 00:13:40.338 "percent": 37 00:13:40.338 } 00:13:40.338 }, 00:13:40.338 "base_bdevs_list": [ 00:13:40.338 { 00:13:40.338 "name": "spare", 00:13:40.338 "uuid": "b9e69b22-d3cd-50d3-86b4-1e058cd1aa79", 00:13:40.338 "is_configured": true, 00:13:40.338 "data_offset": 0, 00:13:40.338 "data_size": 65536 00:13:40.338 }, 00:13:40.338 { 00:13:40.338 "name": null, 00:13:40.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.338 "is_configured": false, 00:13:40.338 "data_offset": 0, 00:13:40.338 "data_size": 65536 00:13:40.338 }, 00:13:40.338 { 00:13:40.338 "name": "BaseBdev3", 00:13:40.338 "uuid": "c485c254-cf0f-586b-ac78-4cf3dd47dfca", 00:13:40.338 "is_configured": true, 00:13:40.338 "data_offset": 0, 00:13:40.338 "data_size": 65536 00:13:40.338 }, 00:13:40.338 { 
00:13:40.338 "name": "BaseBdev4", 00:13:40.338 "uuid": "b39d2215-5241-599f-8a62-076b3ba0397b", 00:13:40.338 "is_configured": true, 00:13:40.338 "data_offset": 0, 00:13:40.338 "data_size": 65536 00:13:40.338 } 00:13:40.338 ] 00:13:40.338 }' 00:13:40.338 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.338 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:40.338 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.338 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:40.338 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=448 00:13:40.338 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:40.338 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:40.338 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.338 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:40.338 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:40.338 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.338 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.338 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.338 08:50:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.338 08:50:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.338 08:50:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.338 08:50:16 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.338 "name": "raid_bdev1", 00:13:40.338 "uuid": "be726d4a-e786-4914-b1f9-db3ff528c6a7", 00:13:40.338 "strip_size_kb": 0, 00:13:40.338 "state": "online", 00:13:40.338 "raid_level": "raid1", 00:13:40.338 "superblock": false, 00:13:40.338 "num_base_bdevs": 4, 00:13:40.338 "num_base_bdevs_discovered": 3, 00:13:40.338 "num_base_bdevs_operational": 3, 00:13:40.338 "process": { 00:13:40.338 "type": "rebuild", 00:13:40.338 "target": "spare", 00:13:40.338 "progress": { 00:13:40.338 "blocks": 26624, 00:13:40.338 "percent": 40 00:13:40.338 } 00:13:40.338 }, 00:13:40.338 "base_bdevs_list": [ 00:13:40.338 { 00:13:40.338 "name": "spare", 00:13:40.338 "uuid": "b9e69b22-d3cd-50d3-86b4-1e058cd1aa79", 00:13:40.338 "is_configured": true, 00:13:40.338 "data_offset": 0, 00:13:40.338 "data_size": 65536 00:13:40.338 }, 00:13:40.338 { 00:13:40.338 "name": null, 00:13:40.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.338 "is_configured": false, 00:13:40.338 "data_offset": 0, 00:13:40.338 "data_size": 65536 00:13:40.338 }, 00:13:40.338 { 00:13:40.338 "name": "BaseBdev3", 00:13:40.338 "uuid": "c485c254-cf0f-586b-ac78-4cf3dd47dfca", 00:13:40.338 "is_configured": true, 00:13:40.338 "data_offset": 0, 00:13:40.338 "data_size": 65536 00:13:40.338 }, 00:13:40.338 { 00:13:40.338 "name": "BaseBdev4", 00:13:40.338 "uuid": "b39d2215-5241-599f-8a62-076b3ba0397b", 00:13:40.338 "is_configured": true, 00:13:40.338 "data_offset": 0, 00:13:40.338 "data_size": 65536 00:13:40.338 } 00:13:40.338 ] 00:13:40.338 }' 00:13:40.338 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.598 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:40.598 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.598 08:50:16 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:40.598 08:50:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:41.535 08:50:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:41.535 08:50:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:41.535 08:50:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.535 08:50:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:41.535 08:50:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:41.535 08:50:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.535 08:50:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.535 08:50:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.535 08:50:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.535 08:50:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.535 08:50:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.535 08:50:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.535 "name": "raid_bdev1", 00:13:41.535 "uuid": "be726d4a-e786-4914-b1f9-db3ff528c6a7", 00:13:41.535 "strip_size_kb": 0, 00:13:41.535 "state": "online", 00:13:41.535 "raid_level": "raid1", 00:13:41.535 "superblock": false, 00:13:41.535 "num_base_bdevs": 4, 00:13:41.535 "num_base_bdevs_discovered": 3, 00:13:41.535 "num_base_bdevs_operational": 3, 00:13:41.535 "process": { 00:13:41.535 "type": "rebuild", 00:13:41.535 "target": "spare", 00:13:41.535 "progress": { 00:13:41.535 "blocks": 49152, 00:13:41.535 "percent": 75 00:13:41.535 } 00:13:41.535 }, 00:13:41.535 
"base_bdevs_list": [ 00:13:41.535 { 00:13:41.535 "name": "spare", 00:13:41.535 "uuid": "b9e69b22-d3cd-50d3-86b4-1e058cd1aa79", 00:13:41.535 "is_configured": true, 00:13:41.535 "data_offset": 0, 00:13:41.535 "data_size": 65536 00:13:41.535 }, 00:13:41.535 { 00:13:41.535 "name": null, 00:13:41.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.535 "is_configured": false, 00:13:41.535 "data_offset": 0, 00:13:41.535 "data_size": 65536 00:13:41.535 }, 00:13:41.535 { 00:13:41.535 "name": "BaseBdev3", 00:13:41.535 "uuid": "c485c254-cf0f-586b-ac78-4cf3dd47dfca", 00:13:41.535 "is_configured": true, 00:13:41.535 "data_offset": 0, 00:13:41.535 "data_size": 65536 00:13:41.535 }, 00:13:41.535 { 00:13:41.535 "name": "BaseBdev4", 00:13:41.535 "uuid": "b39d2215-5241-599f-8a62-076b3ba0397b", 00:13:41.535 "is_configured": true, 00:13:41.535 "data_offset": 0, 00:13:41.535 "data_size": 65536 00:13:41.535 } 00:13:41.535 ] 00:13:41.535 }' 00:13:41.535 08:50:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.535 08:50:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:41.535 08:50:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.794 08:50:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:41.794 08:50:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:42.368 [2024-10-05 08:50:18.591188] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:42.368 [2024-10-05 08:50:18.591312] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:42.368 [2024-10-05 08:50:18.591363] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.628 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:42.628 08:50:19 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:42.628 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.628 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:42.628 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:42.628 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.628 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.628 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.628 08:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.628 08:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.628 08:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.628 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.628 "name": "raid_bdev1", 00:13:42.628 "uuid": "be726d4a-e786-4914-b1f9-db3ff528c6a7", 00:13:42.628 "strip_size_kb": 0, 00:13:42.628 "state": "online", 00:13:42.628 "raid_level": "raid1", 00:13:42.628 "superblock": false, 00:13:42.628 "num_base_bdevs": 4, 00:13:42.628 "num_base_bdevs_discovered": 3, 00:13:42.628 "num_base_bdevs_operational": 3, 00:13:42.628 "base_bdevs_list": [ 00:13:42.628 { 00:13:42.628 "name": "spare", 00:13:42.628 "uuid": "b9e69b22-d3cd-50d3-86b4-1e058cd1aa79", 00:13:42.628 "is_configured": true, 00:13:42.628 "data_offset": 0, 00:13:42.628 "data_size": 65536 00:13:42.628 }, 00:13:42.628 { 00:13:42.628 "name": null, 00:13:42.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.628 "is_configured": false, 00:13:42.628 "data_offset": 0, 00:13:42.628 "data_size": 65536 00:13:42.628 }, 
00:13:42.628 { 00:13:42.628 "name": "BaseBdev3", 00:13:42.628 "uuid": "c485c254-cf0f-586b-ac78-4cf3dd47dfca", 00:13:42.628 "is_configured": true, 00:13:42.628 "data_offset": 0, 00:13:42.628 "data_size": 65536 00:13:42.628 }, 00:13:42.628 { 00:13:42.628 "name": "BaseBdev4", 00:13:42.628 "uuid": "b39d2215-5241-599f-8a62-076b3ba0397b", 00:13:42.628 "is_configured": true, 00:13:42.628 "data_offset": 0, 00:13:42.629 "data_size": 65536 00:13:42.629 } 00:13:42.629 ] 00:13:42.629 }' 00:13:42.629 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.889 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:42.889 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.889 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:42.889 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:42.889 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:42.889 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.889 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:42.889 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:42.889 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.889 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.889 08:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.889 08:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.889 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.889 
08:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.889 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.889 "name": "raid_bdev1", 00:13:42.889 "uuid": "be726d4a-e786-4914-b1f9-db3ff528c6a7", 00:13:42.889 "strip_size_kb": 0, 00:13:42.889 "state": "online", 00:13:42.889 "raid_level": "raid1", 00:13:42.889 "superblock": false, 00:13:42.889 "num_base_bdevs": 4, 00:13:42.889 "num_base_bdevs_discovered": 3, 00:13:42.889 "num_base_bdevs_operational": 3, 00:13:42.889 "base_bdevs_list": [ 00:13:42.889 { 00:13:42.889 "name": "spare", 00:13:42.889 "uuid": "b9e69b22-d3cd-50d3-86b4-1e058cd1aa79", 00:13:42.889 "is_configured": true, 00:13:42.889 "data_offset": 0, 00:13:42.889 "data_size": 65536 00:13:42.889 }, 00:13:42.889 { 00:13:42.889 "name": null, 00:13:42.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.889 "is_configured": false, 00:13:42.889 "data_offset": 0, 00:13:42.889 "data_size": 65536 00:13:42.889 }, 00:13:42.889 { 00:13:42.889 "name": "BaseBdev3", 00:13:42.889 "uuid": "c485c254-cf0f-586b-ac78-4cf3dd47dfca", 00:13:42.889 "is_configured": true, 00:13:42.889 "data_offset": 0, 00:13:42.889 "data_size": 65536 00:13:42.889 }, 00:13:42.889 { 00:13:42.889 "name": "BaseBdev4", 00:13:42.889 "uuid": "b39d2215-5241-599f-8a62-076b3ba0397b", 00:13:42.889 "is_configured": true, 00:13:42.889 "data_offset": 0, 00:13:42.889 "data_size": 65536 00:13:42.889 } 00:13:42.889 ] 00:13:42.889 }' 00:13:42.889 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.889 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:42.889 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.889 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:42.889 08:50:19 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:42.889 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.889 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.889 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.889 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:42.889 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:42.889 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.889 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.889 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.889 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.889 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.889 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.889 08:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.889 08:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.889 08:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.889 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.889 "name": "raid_bdev1", 00:13:42.889 "uuid": "be726d4a-e786-4914-b1f9-db3ff528c6a7", 00:13:42.889 "strip_size_kb": 0, 00:13:42.889 "state": "online", 00:13:42.889 "raid_level": "raid1", 00:13:42.889 "superblock": false, 00:13:42.889 "num_base_bdevs": 4, 00:13:42.889 "num_base_bdevs_discovered": 3, 00:13:42.889 
"num_base_bdevs_operational": 3, 00:13:42.889 "base_bdevs_list": [ 00:13:42.889 { 00:13:42.889 "name": "spare", 00:13:42.889 "uuid": "b9e69b22-d3cd-50d3-86b4-1e058cd1aa79", 00:13:42.889 "is_configured": true, 00:13:42.889 "data_offset": 0, 00:13:42.889 "data_size": 65536 00:13:42.889 }, 00:13:42.889 { 00:13:42.889 "name": null, 00:13:42.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.889 "is_configured": false, 00:13:42.889 "data_offset": 0, 00:13:42.889 "data_size": 65536 00:13:42.889 }, 00:13:42.889 { 00:13:42.889 "name": "BaseBdev3", 00:13:42.889 "uuid": "c485c254-cf0f-586b-ac78-4cf3dd47dfca", 00:13:42.889 "is_configured": true, 00:13:42.889 "data_offset": 0, 00:13:42.889 "data_size": 65536 00:13:42.889 }, 00:13:42.889 { 00:13:42.889 "name": "BaseBdev4", 00:13:42.889 "uuid": "b39d2215-5241-599f-8a62-076b3ba0397b", 00:13:42.889 "is_configured": true, 00:13:42.889 "data_offset": 0, 00:13:42.889 "data_size": 65536 00:13:42.889 } 00:13:42.889 ] 00:13:42.889 }' 00:13:42.889 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.889 08:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.459 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:43.459 08:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.459 08:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.459 [2024-10-05 08:50:19.652705] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:43.459 [2024-10-05 08:50:19.652790] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:43.459 [2024-10-05 08:50:19.652888] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:43.459 [2024-10-05 08:50:19.653013] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:13:43.459 [2024-10-05 08:50:19.653077] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:43.459 08:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.459 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:43.459 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.459 08:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.459 08:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.459 08:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.459 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:43.459 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:43.459 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:43.459 08:50:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:43.459 08:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:43.459 08:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:43.459 08:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:43.459 08:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:43.459 08:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:43.459 08:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:43.459 08:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:43.459 08:50:19 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:43.459 08:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:43.459 /dev/nbd0 00:13:43.459 08:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:43.459 08:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:43.719 08:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:43.719 08:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:43.719 08:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:43.719 08:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:43.719 08:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:43.719 08:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:43.719 08:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:43.719 08:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:43.719 08:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:43.719 1+0 records in 00:13:43.719 1+0 records out 00:13:43.719 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302514 s, 13.5 MB/s 00:13:43.719 08:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:43.719 08:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:43.719 08:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:43.719 08:50:19 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:43.719 08:50:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:43.719 08:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:43.719 08:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:43.719 08:50:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:43.719 /dev/nbd1 00:13:43.719 08:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:43.719 08:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:43.719 08:50:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:43.719 08:50:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:43.719 08:50:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:43.719 08:50:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:43.719 08:50:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:43.979 08:50:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:43.979 08:50:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:43.979 08:50:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:43.979 08:50:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:43.979 1+0 records in 00:13:43.979 1+0 records out 00:13:43.979 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000598017 s, 6.8 MB/s 00:13:43.979 08:50:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:43.979 08:50:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:43.979 08:50:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:43.979 08:50:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:43.979 08:50:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:43.979 08:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:43.979 08:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:43.979 08:50:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:43.979 08:50:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:43.979 08:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:43.979 08:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:43.979 08:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:43.979 08:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:43.979 08:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:43.979 08:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:44.240 08:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:44.240 08:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:44.240 08:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:44.240 08:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:13:44.240 08:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:44.240 08:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:44.240 08:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:44.240 08:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:44.240 08:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:44.240 08:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:44.527 08:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:44.527 08:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:44.527 08:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:44.527 08:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:44.527 08:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:44.527 08:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:44.527 08:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:44.527 08:50:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:44.527 08:50:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:44.527 08:50:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75256 00:13:44.527 08:50:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 75256 ']' 00:13:44.527 08:50:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 75256 00:13:44.527 08:50:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:13:44.527 08:50:20 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:44.527 08:50:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75256 00:13:44.527 08:50:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:44.527 08:50:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:44.527 killing process with pid 75256 00:13:44.527 Received shutdown signal, test time was about 60.000000 seconds 00:13:44.527 00:13:44.527 Latency(us) 00:13:44.527 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.527 =================================================================================================================== 00:13:44.527 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:44.527 08:50:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75256' 00:13:44.527 08:50:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 75256 00:13:44.527 [2024-10-05 08:50:20.817435] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:44.527 08:50:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 75256 00:13:45.097 [2024-10-05 08:50:21.274427] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:46.036 08:50:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:46.036 00:13:46.036 real 0m17.456s 00:13:46.036 user 0m19.061s 00:13:46.036 sys 0m3.220s 00:13:46.036 08:50:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:46.036 ************************************ 00:13:46.036 END TEST raid_rebuild_test 00:13:46.036 ************************************ 00:13:46.036 08:50:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.297 08:50:22 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb 
raid_rebuild_test raid1 4 true false true 00:13:46.297 08:50:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:46.297 08:50:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:46.297 08:50:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:46.297 ************************************ 00:13:46.297 START TEST raid_rebuild_test_sb 00:13:46.297 ************************************ 00:13:46.297 08:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:13:46.297 08:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:46.297 08:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:46.297 08:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:46.297 08:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:46.297 08:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:46.297 08:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:46.297 08:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:46.297 08:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:46.297 08:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:46.297 08:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:46.297 08:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:46.297 08:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:46.297 08:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:46.297 08:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 
00:13:46.297 08:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:46.297 08:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:46.297 08:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:46.297 08:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:46.297 08:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:46.297 08:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:46.297 08:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:46.297 08:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:46.297 08:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:46.297 08:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:46.297 08:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:46.297 08:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:46.297 08:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:46.297 08:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:46.297 08:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:46.298 08:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:46.298 08:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75601 00:13:46.298 08:50:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:46.298 08:50:22 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75601 00:13:46.298 08:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 75601 ']' 00:13:46.298 08:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.298 08:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:46.298 08:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.298 08:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:46.298 08:50:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.298 [2024-10-05 08:50:22.629815] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:13:46.298 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:46.298 Zero copy mechanism will not be used. 
00:13:46.298 [2024-10-05 08:50:22.629984] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75601 ] 00:13:46.557 [2024-10-05 08:50:22.793201] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.557 [2024-10-05 08:50:22.989302] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.817 [2024-10-05 08:50:23.167624] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:46.817 [2024-10-05 08:50:23.167676] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:47.077 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:47.077 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:47.077 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:47.077 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:47.077 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.077 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.077 BaseBdev1_malloc 00:13:47.077 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.077 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:47.077 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.077 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.077 [2024-10-05 08:50:23.484027] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:13:47.077 [2024-10-05 08:50:23.484090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.077 [2024-10-05 08:50:23.484116] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:47.077 [2024-10-05 08:50:23.484129] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.077 [2024-10-05 08:50:23.486122] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.077 [2024-10-05 08:50:23.486161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:47.077 BaseBdev1 00:13:47.077 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.077 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:47.077 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:47.077 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.077 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.337 BaseBdev2_malloc 00:13:47.337 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.337 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:47.337 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.337 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.337 [2024-10-05 08:50:23.552945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:47.337 [2024-10-05 08:50:23.553006] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.337 [2024-10-05 08:50:23.553027] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:47.337 [2024-10-05 08:50:23.553037] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.337 [2024-10-05 08:50:23.555053] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.337 [2024-10-05 08:50:23.555092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:47.337 BaseBdev2 00:13:47.337 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.337 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:47.337 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:47.337 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.337 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.337 BaseBdev3_malloc 00:13:47.337 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.337 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:47.337 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.337 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.337 [2024-10-05 08:50:23.605297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:47.337 [2024-10-05 08:50:23.605348] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.337 [2024-10-05 08:50:23.605370] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:47.337 [2024-10-05 08:50:23.605381] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:13:47.337 [2024-10-05 08:50:23.607312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.337 [2024-10-05 08:50:23.607401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:47.337 BaseBdev3 00:13:47.337 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.337 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:47.337 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:47.337 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.337 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.337 BaseBdev4_malloc 00:13:47.337 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.337 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:47.337 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.337 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.337 [2024-10-05 08:50:23.658662] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:47.338 [2024-10-05 08:50:23.658752] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.338 [2024-10-05 08:50:23.658774] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:47.338 [2024-10-05 08:50:23.658784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.338 [2024-10-05 08:50:23.660663] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.338 [2024-10-05 08:50:23.660704] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:47.338 BaseBdev4 00:13:47.338 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.338 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:47.338 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.338 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.338 spare_malloc 00:13:47.338 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.338 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:47.338 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.338 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.338 spare_delay 00:13:47.338 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.338 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:47.338 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.338 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.338 [2024-10-05 08:50:23.720367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:47.338 [2024-10-05 08:50:23.720419] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.338 [2024-10-05 08:50:23.720438] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:47.338 [2024-10-05 08:50:23.720448] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:13:47.338 [2024-10-05 08:50:23.722446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.338 [2024-10-05 08:50:23.722485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:47.338 spare 00:13:47.338 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.338 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:47.338 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.338 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.338 [2024-10-05 08:50:23.732401] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:47.338 [2024-10-05 08:50:23.734071] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:47.338 [2024-10-05 08:50:23.734134] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:47.338 [2024-10-05 08:50:23.734186] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:47.338 [2024-10-05 08:50:23.734362] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:47.338 [2024-10-05 08:50:23.734375] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:47.338 [2024-10-05 08:50:23.734600] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:47.338 [2024-10-05 08:50:23.734750] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:47.338 [2024-10-05 08:50:23.734759] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:47.338 [2024-10-05 08:50:23.734889] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.338 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.338 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:47.338 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.338 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.338 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:47.338 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:47.338 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:47.338 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.338 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.338 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.338 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.338 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.338 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.338 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.338 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.338 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.338 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.338 "name": "raid_bdev1", 00:13:47.338 "uuid": 
"a162d9d5-ae76-45ed-959c-d787981369f9", 00:13:47.338 "strip_size_kb": 0, 00:13:47.338 "state": "online", 00:13:47.338 "raid_level": "raid1", 00:13:47.338 "superblock": true, 00:13:47.338 "num_base_bdevs": 4, 00:13:47.338 "num_base_bdevs_discovered": 4, 00:13:47.338 "num_base_bdevs_operational": 4, 00:13:47.338 "base_bdevs_list": [ 00:13:47.338 { 00:13:47.338 "name": "BaseBdev1", 00:13:47.338 "uuid": "a3f100fd-4d4d-57d0-b764-8e642f7de6db", 00:13:47.338 "is_configured": true, 00:13:47.338 "data_offset": 2048, 00:13:47.338 "data_size": 63488 00:13:47.338 }, 00:13:47.338 { 00:13:47.338 "name": "BaseBdev2", 00:13:47.338 "uuid": "ebbcfcbb-318b-5fd9-814b-431c4ae69b24", 00:13:47.338 "is_configured": true, 00:13:47.338 "data_offset": 2048, 00:13:47.338 "data_size": 63488 00:13:47.338 }, 00:13:47.338 { 00:13:47.338 "name": "BaseBdev3", 00:13:47.338 "uuid": "fb9f5596-6b6c-51c6-b761-e40ab2dc6810", 00:13:47.338 "is_configured": true, 00:13:47.338 "data_offset": 2048, 00:13:47.338 "data_size": 63488 00:13:47.338 }, 00:13:47.338 { 00:13:47.338 "name": "BaseBdev4", 00:13:47.338 "uuid": "c09957d2-8cc4-5224-97ae-6799ae14978c", 00:13:47.338 "is_configured": true, 00:13:47.338 "data_offset": 2048, 00:13:47.338 "data_size": 63488 00:13:47.338 } 00:13:47.338 ] 00:13:47.338 }' 00:13:47.338 08:50:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.338 08:50:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.908 08:50:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:47.908 08:50:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:47.908 08:50:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.908 08:50:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.908 [2024-10-05 08:50:24.235812] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:13:47.908 08:50:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.908 08:50:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:47.908 08:50:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.908 08:50:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:47.908 08:50:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.908 08:50:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.908 08:50:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.908 08:50:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:47.908 08:50:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:47.908 08:50:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:47.908 08:50:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:47.908 08:50:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:47.908 08:50:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:47.908 08:50:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:47.908 08:50:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:47.908 08:50:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:47.908 08:50:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:47.908 08:50:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:47.908 08:50:24 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:47.908 08:50:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:47.908 08:50:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:48.167 [2024-10-05 08:50:24.499170] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:48.167 /dev/nbd0 00:13:48.167 08:50:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:48.167 08:50:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:48.167 08:50:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:48.168 08:50:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:48.168 08:50:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:48.168 08:50:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:48.168 08:50:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:48.168 08:50:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:48.168 08:50:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:48.168 08:50:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:48.168 08:50:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:48.168 1+0 records in 00:13:48.168 1+0 records out 00:13:48.168 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00059584 s, 6.9 MB/s 00:13:48.168 08:50:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.168 08:50:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:48.168 08:50:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.168 08:50:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:48.168 08:50:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:48.168 08:50:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:48.168 08:50:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:48.168 08:50:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:48.168 08:50:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:48.168 08:50:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:54.745 63488+0 records in 00:13:54.745 63488+0 records out 00:13:54.745 32505856 bytes (33 MB, 31 MiB) copied, 5.65118 s, 5.8 MB/s 00:13:54.745 08:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:54.745 08:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:54.745 08:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:54.745 08:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:54.745 08:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:54.745 08:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:54.745 08:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk 
/dev/nbd0 00:13:54.745 [2024-10-05 08:50:30.414549] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:54.745 08:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:54.745 08:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:54.745 08:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:54.745 08:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:54.745 08:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:54.745 08:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:54.745 08:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:54.745 08:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:54.745 08:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:54.745 08:50:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.745 08:50:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.745 [2024-10-05 08:50:30.468010] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:54.745 08:50:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.745 08:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:54.745 08:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:54.745 08:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:54.745 08:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:54.745 08:50:30 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:54.745 08:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:54.745 08:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.745 08:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.745 08:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.745 08:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.745 08:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.746 08:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.746 08:50:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.746 08:50:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.746 08:50:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.746 08:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.746 "name": "raid_bdev1", 00:13:54.746 "uuid": "a162d9d5-ae76-45ed-959c-d787981369f9", 00:13:54.746 "strip_size_kb": 0, 00:13:54.746 "state": "online", 00:13:54.746 "raid_level": "raid1", 00:13:54.746 "superblock": true, 00:13:54.746 "num_base_bdevs": 4, 00:13:54.746 "num_base_bdevs_discovered": 3, 00:13:54.746 "num_base_bdevs_operational": 3, 00:13:54.746 "base_bdevs_list": [ 00:13:54.746 { 00:13:54.746 "name": null, 00:13:54.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.746 "is_configured": false, 00:13:54.746 "data_offset": 0, 00:13:54.746 "data_size": 63488 00:13:54.746 }, 00:13:54.746 { 00:13:54.746 "name": "BaseBdev2", 00:13:54.746 "uuid": "ebbcfcbb-318b-5fd9-814b-431c4ae69b24", 00:13:54.746 "is_configured": true, 00:13:54.746 
"data_offset": 2048, 00:13:54.746 "data_size": 63488 00:13:54.746 }, 00:13:54.746 { 00:13:54.746 "name": "BaseBdev3", 00:13:54.746 "uuid": "fb9f5596-6b6c-51c6-b761-e40ab2dc6810", 00:13:54.746 "is_configured": true, 00:13:54.746 "data_offset": 2048, 00:13:54.746 "data_size": 63488 00:13:54.746 }, 00:13:54.746 { 00:13:54.746 "name": "BaseBdev4", 00:13:54.746 "uuid": "c09957d2-8cc4-5224-97ae-6799ae14978c", 00:13:54.746 "is_configured": true, 00:13:54.746 "data_offset": 2048, 00:13:54.746 "data_size": 63488 00:13:54.746 } 00:13:54.746 ] 00:13:54.746 }' 00:13:54.746 08:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.746 08:50:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.746 08:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:54.746 08:50:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.746 08:50:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.746 [2024-10-05 08:50:30.931145] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:54.746 [2024-10-05 08:50:30.944309] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:13:54.746 08:50:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.746 08:50:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:54.746 [2024-10-05 08:50:30.946087] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:55.684 08:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:55.684 08:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:55.684 08:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:13:55.684 08:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:55.684 08:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.684 08:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.684 08:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.684 08:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.684 08:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.684 08:50:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.684 08:50:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:55.684 "name": "raid_bdev1", 00:13:55.684 "uuid": "a162d9d5-ae76-45ed-959c-d787981369f9", 00:13:55.684 "strip_size_kb": 0, 00:13:55.684 "state": "online", 00:13:55.684 "raid_level": "raid1", 00:13:55.684 "superblock": true, 00:13:55.684 "num_base_bdevs": 4, 00:13:55.684 "num_base_bdevs_discovered": 4, 00:13:55.684 "num_base_bdevs_operational": 4, 00:13:55.684 "process": { 00:13:55.684 "type": "rebuild", 00:13:55.684 "target": "spare", 00:13:55.684 "progress": { 00:13:55.684 "blocks": 20480, 00:13:55.684 "percent": 32 00:13:55.684 } 00:13:55.685 }, 00:13:55.685 "base_bdevs_list": [ 00:13:55.685 { 00:13:55.685 "name": "spare", 00:13:55.685 "uuid": "723bf215-bc6d-57df-81c8-29859a87533b", 00:13:55.685 "is_configured": true, 00:13:55.685 "data_offset": 2048, 00:13:55.685 "data_size": 63488 00:13:55.685 }, 00:13:55.685 { 00:13:55.685 "name": "BaseBdev2", 00:13:55.685 "uuid": "ebbcfcbb-318b-5fd9-814b-431c4ae69b24", 00:13:55.685 "is_configured": true, 00:13:55.685 "data_offset": 2048, 00:13:55.685 "data_size": 63488 00:13:55.685 }, 00:13:55.685 { 00:13:55.685 "name": "BaseBdev3", 00:13:55.685 "uuid": 
"fb9f5596-6b6c-51c6-b761-e40ab2dc6810", 00:13:55.685 "is_configured": true, 00:13:55.685 "data_offset": 2048, 00:13:55.685 "data_size": 63488 00:13:55.685 }, 00:13:55.685 { 00:13:55.685 "name": "BaseBdev4", 00:13:55.685 "uuid": "c09957d2-8cc4-5224-97ae-6799ae14978c", 00:13:55.685 "is_configured": true, 00:13:55.685 "data_offset": 2048, 00:13:55.685 "data_size": 63488 00:13:55.685 } 00:13:55.685 ] 00:13:55.685 }' 00:13:55.685 08:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:55.685 08:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:55.685 08:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:55.685 08:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:55.685 08:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:55.685 08:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.685 08:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.685 [2024-10-05 08:50:32.106504] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:55.685 [2024-10-05 08:50:32.150940] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:55.685 [2024-10-05 08:50:32.151058] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:55.685 [2024-10-05 08:50:32.151098] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:55.685 [2024-10-05 08:50:32.151122] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:55.945 08:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.945 08:50:32 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:55.945 08:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:55.945 08:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:55.945 08:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:55.945 08:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:55.945 08:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:55.945 08:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.945 08:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.945 08:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.945 08:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.945 08:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.945 08:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.945 08:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.945 08:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.945 08:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.945 08:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.945 "name": "raid_bdev1", 00:13:55.945 "uuid": "a162d9d5-ae76-45ed-959c-d787981369f9", 00:13:55.945 "strip_size_kb": 0, 00:13:55.945 "state": "online", 00:13:55.945 "raid_level": "raid1", 00:13:55.945 "superblock": true, 00:13:55.945 "num_base_bdevs": 4, 00:13:55.945 
"num_base_bdevs_discovered": 3, 00:13:55.945 "num_base_bdevs_operational": 3, 00:13:55.945 "base_bdevs_list": [ 00:13:55.945 { 00:13:55.945 "name": null, 00:13:55.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.945 "is_configured": false, 00:13:55.945 "data_offset": 0, 00:13:55.945 "data_size": 63488 00:13:55.945 }, 00:13:55.945 { 00:13:55.945 "name": "BaseBdev2", 00:13:55.945 "uuid": "ebbcfcbb-318b-5fd9-814b-431c4ae69b24", 00:13:55.945 "is_configured": true, 00:13:55.945 "data_offset": 2048, 00:13:55.945 "data_size": 63488 00:13:55.945 }, 00:13:55.945 { 00:13:55.945 "name": "BaseBdev3", 00:13:55.945 "uuid": "fb9f5596-6b6c-51c6-b761-e40ab2dc6810", 00:13:55.945 "is_configured": true, 00:13:55.945 "data_offset": 2048, 00:13:55.945 "data_size": 63488 00:13:55.945 }, 00:13:55.945 { 00:13:55.945 "name": "BaseBdev4", 00:13:55.945 "uuid": "c09957d2-8cc4-5224-97ae-6799ae14978c", 00:13:55.945 "is_configured": true, 00:13:55.945 "data_offset": 2048, 00:13:55.945 "data_size": 63488 00:13:55.945 } 00:13:55.945 ] 00:13:55.945 }' 00:13:55.945 08:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.945 08:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.227 08:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:56.227 08:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.227 08:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:56.227 08:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:56.227 08:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.227 08:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.227 08:50:32 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.227 08:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.227 08:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.227 08:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.227 08:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.227 "name": "raid_bdev1", 00:13:56.227 "uuid": "a162d9d5-ae76-45ed-959c-d787981369f9", 00:13:56.227 "strip_size_kb": 0, 00:13:56.227 "state": "online", 00:13:56.227 "raid_level": "raid1", 00:13:56.227 "superblock": true, 00:13:56.227 "num_base_bdevs": 4, 00:13:56.227 "num_base_bdevs_discovered": 3, 00:13:56.227 "num_base_bdevs_operational": 3, 00:13:56.227 "base_bdevs_list": [ 00:13:56.227 { 00:13:56.227 "name": null, 00:13:56.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.227 "is_configured": false, 00:13:56.227 "data_offset": 0, 00:13:56.227 "data_size": 63488 00:13:56.227 }, 00:13:56.227 { 00:13:56.227 "name": "BaseBdev2", 00:13:56.227 "uuid": "ebbcfcbb-318b-5fd9-814b-431c4ae69b24", 00:13:56.227 "is_configured": true, 00:13:56.227 "data_offset": 2048, 00:13:56.227 "data_size": 63488 00:13:56.227 }, 00:13:56.227 { 00:13:56.227 "name": "BaseBdev3", 00:13:56.227 "uuid": "fb9f5596-6b6c-51c6-b761-e40ab2dc6810", 00:13:56.227 "is_configured": true, 00:13:56.227 "data_offset": 2048, 00:13:56.227 "data_size": 63488 00:13:56.227 }, 00:13:56.227 { 00:13:56.227 "name": "BaseBdev4", 00:13:56.227 "uuid": "c09957d2-8cc4-5224-97ae-6799ae14978c", 00:13:56.227 "is_configured": true, 00:13:56.227 "data_offset": 2048, 00:13:56.227 "data_size": 63488 00:13:56.227 } 00:13:56.227 ] 00:13:56.227 }' 00:13:56.227 08:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.527 08:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == 
\n\o\n\e ]] 00:13:56.527 08:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.527 08:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:56.527 08:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:56.527 08:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.527 08:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.527 [2024-10-05 08:50:32.757701] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:56.527 [2024-10-05 08:50:32.771434] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:13:56.528 08:50:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.528 08:50:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:56.528 [2024-10-05 08:50:32.773276] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:57.464 08:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:57.464 08:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:57.464 08:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:57.464 08:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:57.464 08:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:57.464 08:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.464 08:50:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.464 08:50:33 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.464 08:50:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.464 08:50:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.464 08:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:57.464 "name": "raid_bdev1", 00:13:57.464 "uuid": "a162d9d5-ae76-45ed-959c-d787981369f9", 00:13:57.464 "strip_size_kb": 0, 00:13:57.464 "state": "online", 00:13:57.464 "raid_level": "raid1", 00:13:57.464 "superblock": true, 00:13:57.464 "num_base_bdevs": 4, 00:13:57.464 "num_base_bdevs_discovered": 4, 00:13:57.464 "num_base_bdevs_operational": 4, 00:13:57.464 "process": { 00:13:57.464 "type": "rebuild", 00:13:57.464 "target": "spare", 00:13:57.464 "progress": { 00:13:57.464 "blocks": 20480, 00:13:57.464 "percent": 32 00:13:57.464 } 00:13:57.464 }, 00:13:57.464 "base_bdevs_list": [ 00:13:57.464 { 00:13:57.465 "name": "spare", 00:13:57.465 "uuid": "723bf215-bc6d-57df-81c8-29859a87533b", 00:13:57.465 "is_configured": true, 00:13:57.465 "data_offset": 2048, 00:13:57.465 "data_size": 63488 00:13:57.465 }, 00:13:57.465 { 00:13:57.465 "name": "BaseBdev2", 00:13:57.465 "uuid": "ebbcfcbb-318b-5fd9-814b-431c4ae69b24", 00:13:57.465 "is_configured": true, 00:13:57.465 "data_offset": 2048, 00:13:57.465 "data_size": 63488 00:13:57.465 }, 00:13:57.465 { 00:13:57.465 "name": "BaseBdev3", 00:13:57.465 "uuid": "fb9f5596-6b6c-51c6-b761-e40ab2dc6810", 00:13:57.465 "is_configured": true, 00:13:57.465 "data_offset": 2048, 00:13:57.465 "data_size": 63488 00:13:57.465 }, 00:13:57.465 { 00:13:57.465 "name": "BaseBdev4", 00:13:57.465 "uuid": "c09957d2-8cc4-5224-97ae-6799ae14978c", 00:13:57.465 "is_configured": true, 00:13:57.465 "data_offset": 2048, 00:13:57.465 "data_size": 63488 00:13:57.465 } 00:13:57.465 ] 00:13:57.465 }' 00:13:57.465 08:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:13:57.465 08:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:57.465 08:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:57.465 08:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:57.465 08:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:57.465 08:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:57.465 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:57.465 08:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:57.465 08:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:57.465 08:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:57.465 08:50:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:57.465 08:50:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.465 08:50:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.465 [2024-10-05 08:50:33.933081] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:57.724 [2024-10-05 08:50:34.077864] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:13:57.724 08:50:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.724 08:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:57.724 08:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:57.724 08:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:13:57.724 08:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:57.724 08:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:57.724 08:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:57.724 08:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:57.724 08:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.724 08:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.724 08:50:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.724 08:50:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.724 08:50:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.724 08:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:57.724 "name": "raid_bdev1", 00:13:57.724 "uuid": "a162d9d5-ae76-45ed-959c-d787981369f9", 00:13:57.724 "strip_size_kb": 0, 00:13:57.724 "state": "online", 00:13:57.724 "raid_level": "raid1", 00:13:57.724 "superblock": true, 00:13:57.724 "num_base_bdevs": 4, 00:13:57.724 "num_base_bdevs_discovered": 3, 00:13:57.724 "num_base_bdevs_operational": 3, 00:13:57.724 "process": { 00:13:57.724 "type": "rebuild", 00:13:57.724 "target": "spare", 00:13:57.724 "progress": { 00:13:57.724 "blocks": 24576, 00:13:57.724 "percent": 38 00:13:57.724 } 00:13:57.724 }, 00:13:57.724 "base_bdevs_list": [ 00:13:57.724 { 00:13:57.724 "name": "spare", 00:13:57.724 "uuid": "723bf215-bc6d-57df-81c8-29859a87533b", 00:13:57.724 "is_configured": true, 00:13:57.724 "data_offset": 2048, 00:13:57.724 "data_size": 63488 00:13:57.724 }, 00:13:57.724 { 00:13:57.724 "name": null, 00:13:57.724 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:57.724 "is_configured": false, 00:13:57.724 "data_offset": 0, 00:13:57.724 "data_size": 63488 00:13:57.724 }, 00:13:57.724 { 00:13:57.724 "name": "BaseBdev3", 00:13:57.724 "uuid": "fb9f5596-6b6c-51c6-b761-e40ab2dc6810", 00:13:57.724 "is_configured": true, 00:13:57.724 "data_offset": 2048, 00:13:57.724 "data_size": 63488 00:13:57.724 }, 00:13:57.724 { 00:13:57.724 "name": "BaseBdev4", 00:13:57.724 "uuid": "c09957d2-8cc4-5224-97ae-6799ae14978c", 00:13:57.724 "is_configured": true, 00:13:57.724 "data_offset": 2048, 00:13:57.724 "data_size": 63488 00:13:57.724 } 00:13:57.724 ] 00:13:57.724 }' 00:13:57.724 08:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:57.725 08:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:57.725 08:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:57.984 08:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:57.984 08:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=466 00:13:57.984 08:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:57.985 08:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:57.985 08:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:57.985 08:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:57.985 08:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:57.985 08:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:57.985 08:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:13:57.985 08:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.985 08:50:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.985 08:50:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.985 08:50:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.985 08:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:57.985 "name": "raid_bdev1", 00:13:57.985 "uuid": "a162d9d5-ae76-45ed-959c-d787981369f9", 00:13:57.985 "strip_size_kb": 0, 00:13:57.985 "state": "online", 00:13:57.985 "raid_level": "raid1", 00:13:57.985 "superblock": true, 00:13:57.985 "num_base_bdevs": 4, 00:13:57.985 "num_base_bdevs_discovered": 3, 00:13:57.985 "num_base_bdevs_operational": 3, 00:13:57.985 "process": { 00:13:57.985 "type": "rebuild", 00:13:57.985 "target": "spare", 00:13:57.985 "progress": { 00:13:57.985 "blocks": 26624, 00:13:57.985 "percent": 41 00:13:57.985 } 00:13:57.985 }, 00:13:57.985 "base_bdevs_list": [ 00:13:57.985 { 00:13:57.985 "name": "spare", 00:13:57.985 "uuid": "723bf215-bc6d-57df-81c8-29859a87533b", 00:13:57.985 "is_configured": true, 00:13:57.985 "data_offset": 2048, 00:13:57.985 "data_size": 63488 00:13:57.985 }, 00:13:57.985 { 00:13:57.985 "name": null, 00:13:57.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.985 "is_configured": false, 00:13:57.985 "data_offset": 0, 00:13:57.985 "data_size": 63488 00:13:57.985 }, 00:13:57.985 { 00:13:57.985 "name": "BaseBdev3", 00:13:57.985 "uuid": "fb9f5596-6b6c-51c6-b761-e40ab2dc6810", 00:13:57.985 "is_configured": true, 00:13:57.985 "data_offset": 2048, 00:13:57.985 "data_size": 63488 00:13:57.985 }, 00:13:57.985 { 00:13:57.985 "name": "BaseBdev4", 00:13:57.985 "uuid": "c09957d2-8cc4-5224-97ae-6799ae14978c", 00:13:57.985 "is_configured": true, 00:13:57.985 "data_offset": 2048, 00:13:57.985 "data_size": 63488 
00:13:57.985 } 00:13:57.985 ] 00:13:57.985 }' 00:13:57.985 08:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:57.985 08:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:57.985 08:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:57.985 08:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:57.985 08:50:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:58.924 08:50:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:58.924 08:50:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:58.924 08:50:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.924 08:50:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:58.924 08:50:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:58.924 08:50:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.924 08:50:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.924 08:50:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.924 08:50:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.924 08:50:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.924 08:50:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.184 08:50:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:59.184 "name": "raid_bdev1", 00:13:59.184 "uuid": 
"a162d9d5-ae76-45ed-959c-d787981369f9", 00:13:59.184 "strip_size_kb": 0, 00:13:59.184 "state": "online", 00:13:59.184 "raid_level": "raid1", 00:13:59.184 "superblock": true, 00:13:59.184 "num_base_bdevs": 4, 00:13:59.184 "num_base_bdevs_discovered": 3, 00:13:59.184 "num_base_bdevs_operational": 3, 00:13:59.184 "process": { 00:13:59.184 "type": "rebuild", 00:13:59.184 "target": "spare", 00:13:59.184 "progress": { 00:13:59.184 "blocks": 49152, 00:13:59.184 "percent": 77 00:13:59.184 } 00:13:59.184 }, 00:13:59.184 "base_bdevs_list": [ 00:13:59.184 { 00:13:59.184 "name": "spare", 00:13:59.184 "uuid": "723bf215-bc6d-57df-81c8-29859a87533b", 00:13:59.184 "is_configured": true, 00:13:59.184 "data_offset": 2048, 00:13:59.184 "data_size": 63488 00:13:59.184 }, 00:13:59.184 { 00:13:59.184 "name": null, 00:13:59.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.184 "is_configured": false, 00:13:59.184 "data_offset": 0, 00:13:59.184 "data_size": 63488 00:13:59.184 }, 00:13:59.184 { 00:13:59.184 "name": "BaseBdev3", 00:13:59.184 "uuid": "fb9f5596-6b6c-51c6-b761-e40ab2dc6810", 00:13:59.184 "is_configured": true, 00:13:59.184 "data_offset": 2048, 00:13:59.184 "data_size": 63488 00:13:59.184 }, 00:13:59.184 { 00:13:59.184 "name": "BaseBdev4", 00:13:59.184 "uuid": "c09957d2-8cc4-5224-97ae-6799ae14978c", 00:13:59.184 "is_configured": true, 00:13:59.184 "data_offset": 2048, 00:13:59.184 "data_size": 63488 00:13:59.184 } 00:13:59.184 ] 00:13:59.184 }' 00:13:59.184 08:50:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:59.184 08:50:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:59.184 08:50:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:59.184 08:50:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:59.184 08:50:35 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:13:59.753 [2024-10-05 08:50:35.985514] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:59.753 [2024-10-05 08:50:35.985630] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:59.753 [2024-10-05 08:50:35.985753] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.322 08:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:00.322 08:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:00.322 08:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.322 08:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:00.322 08:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:00.322 08:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.322 08:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.322 08:50:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.322 08:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.322 08:50:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.322 08:50:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.322 08:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.322 "name": "raid_bdev1", 00:14:00.322 "uuid": "a162d9d5-ae76-45ed-959c-d787981369f9", 00:14:00.322 "strip_size_kb": 0, 00:14:00.322 "state": "online", 00:14:00.322 "raid_level": "raid1", 00:14:00.322 "superblock": true, 00:14:00.322 "num_base_bdevs": 
4, 00:14:00.322 "num_base_bdevs_discovered": 3, 00:14:00.322 "num_base_bdevs_operational": 3, 00:14:00.322 "base_bdevs_list": [ 00:14:00.322 { 00:14:00.322 "name": "spare", 00:14:00.322 "uuid": "723bf215-bc6d-57df-81c8-29859a87533b", 00:14:00.322 "is_configured": true, 00:14:00.322 "data_offset": 2048, 00:14:00.322 "data_size": 63488 00:14:00.322 }, 00:14:00.322 { 00:14:00.322 "name": null, 00:14:00.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.322 "is_configured": false, 00:14:00.322 "data_offset": 0, 00:14:00.322 "data_size": 63488 00:14:00.322 }, 00:14:00.322 { 00:14:00.322 "name": "BaseBdev3", 00:14:00.322 "uuid": "fb9f5596-6b6c-51c6-b761-e40ab2dc6810", 00:14:00.322 "is_configured": true, 00:14:00.322 "data_offset": 2048, 00:14:00.322 "data_size": 63488 00:14:00.322 }, 00:14:00.322 { 00:14:00.322 "name": "BaseBdev4", 00:14:00.322 "uuid": "c09957d2-8cc4-5224-97ae-6799ae14978c", 00:14:00.322 "is_configured": true, 00:14:00.322 "data_offset": 2048, 00:14:00.322 "data_size": 63488 00:14:00.322 } 00:14:00.322 ] 00:14:00.322 }' 00:14:00.322 08:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.322 08:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:00.322 08:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:00.322 08:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:00.322 08:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:00.322 08:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:00.322 08:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.322 08:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:00.322 08:50:36 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:00.322 08:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.322 08:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.322 08:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.322 08:50:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.322 08:50:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.322 08:50:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.322 08:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.322 "name": "raid_bdev1", 00:14:00.322 "uuid": "a162d9d5-ae76-45ed-959c-d787981369f9", 00:14:00.322 "strip_size_kb": 0, 00:14:00.322 "state": "online", 00:14:00.322 "raid_level": "raid1", 00:14:00.322 "superblock": true, 00:14:00.322 "num_base_bdevs": 4, 00:14:00.322 "num_base_bdevs_discovered": 3, 00:14:00.322 "num_base_bdevs_operational": 3, 00:14:00.322 "base_bdevs_list": [ 00:14:00.322 { 00:14:00.322 "name": "spare", 00:14:00.322 "uuid": "723bf215-bc6d-57df-81c8-29859a87533b", 00:14:00.322 "is_configured": true, 00:14:00.322 "data_offset": 2048, 00:14:00.322 "data_size": 63488 00:14:00.322 }, 00:14:00.322 { 00:14:00.322 "name": null, 00:14:00.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.322 "is_configured": false, 00:14:00.322 "data_offset": 0, 00:14:00.322 "data_size": 63488 00:14:00.322 }, 00:14:00.322 { 00:14:00.322 "name": "BaseBdev3", 00:14:00.322 "uuid": "fb9f5596-6b6c-51c6-b761-e40ab2dc6810", 00:14:00.322 "is_configured": true, 00:14:00.322 "data_offset": 2048, 00:14:00.322 "data_size": 63488 00:14:00.322 }, 00:14:00.322 { 00:14:00.322 "name": "BaseBdev4", 00:14:00.322 "uuid": 
"c09957d2-8cc4-5224-97ae-6799ae14978c", 00:14:00.322 "is_configured": true, 00:14:00.322 "data_offset": 2048, 00:14:00.322 "data_size": 63488 00:14:00.322 } 00:14:00.322 ] 00:14:00.322 }' 00:14:00.322 08:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.322 08:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:00.323 08:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:00.582 08:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:00.582 08:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:00.582 08:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.582 08:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.582 08:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:00.582 08:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:00.582 08:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:00.582 08:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.582 08:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.582 08:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.582 08:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.582 08:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.582 08:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.582 08:50:36 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.582 08:50:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.582 08:50:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.582 08:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.582 "name": "raid_bdev1", 00:14:00.582 "uuid": "a162d9d5-ae76-45ed-959c-d787981369f9", 00:14:00.582 "strip_size_kb": 0, 00:14:00.582 "state": "online", 00:14:00.582 "raid_level": "raid1", 00:14:00.582 "superblock": true, 00:14:00.582 "num_base_bdevs": 4, 00:14:00.582 "num_base_bdevs_discovered": 3, 00:14:00.582 "num_base_bdevs_operational": 3, 00:14:00.582 "base_bdevs_list": [ 00:14:00.582 { 00:14:00.582 "name": "spare", 00:14:00.582 "uuid": "723bf215-bc6d-57df-81c8-29859a87533b", 00:14:00.582 "is_configured": true, 00:14:00.582 "data_offset": 2048, 00:14:00.582 "data_size": 63488 00:14:00.582 }, 00:14:00.582 { 00:14:00.582 "name": null, 00:14:00.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.582 "is_configured": false, 00:14:00.582 "data_offset": 0, 00:14:00.582 "data_size": 63488 00:14:00.582 }, 00:14:00.582 { 00:14:00.582 "name": "BaseBdev3", 00:14:00.582 "uuid": "fb9f5596-6b6c-51c6-b761-e40ab2dc6810", 00:14:00.582 "is_configured": true, 00:14:00.582 "data_offset": 2048, 00:14:00.582 "data_size": 63488 00:14:00.582 }, 00:14:00.582 { 00:14:00.582 "name": "BaseBdev4", 00:14:00.582 "uuid": "c09957d2-8cc4-5224-97ae-6799ae14978c", 00:14:00.582 "is_configured": true, 00:14:00.582 "data_offset": 2048, 00:14:00.582 "data_size": 63488 00:14:00.582 } 00:14:00.582 ] 00:14:00.582 }' 00:14:00.582 08:50:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.582 08:50:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.843 08:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd 
bdev_raid_delete raid_bdev1 00:14:00.843 08:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.843 08:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.843 [2024-10-05 08:50:37.230800] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:00.843 [2024-10-05 08:50:37.230875] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:00.843 [2024-10-05 08:50:37.230982] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:00.843 [2024-10-05 08:50:37.231075] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:00.843 [2024-10-05 08:50:37.231124] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:00.843 08:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.843 08:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.843 08:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.843 08:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.843 08:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:00.843 08:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.843 08:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:00.843 08:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:00.843 08:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:00.843 08:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:00.843 
08:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:00.843 08:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:00.843 08:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:00.843 08:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:00.843 08:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:00.843 08:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:00.843 08:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:00.843 08:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:00.843 08:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:01.103 /dev/nbd0 00:14:01.103 08:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:01.103 08:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:01.103 08:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:01.103 08:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:01.103 08:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:01.103 08:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:01.103 08:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:01.103 08:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:01.103 08:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:01.103 08:50:37 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:01.103 08:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:01.103 1+0 records in 00:14:01.103 1+0 records out 00:14:01.103 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000481324 s, 8.5 MB/s 00:14:01.103 08:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.103 08:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:01.103 08:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.103 08:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:01.103 08:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:01.103 08:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:01.103 08:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:01.103 08:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:01.363 /dev/nbd1 00:14:01.363 08:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:01.363 08:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:01.363 08:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:01.363 08:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:01.363 08:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:01.363 08:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- 
# (( i <= 20 )) 00:14:01.363 08:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:01.363 08:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:01.363 08:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:01.363 08:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:01.363 08:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:01.363 1+0 records in 00:14:01.363 1+0 records out 00:14:01.363 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412226 s, 9.9 MB/s 00:14:01.363 08:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.363 08:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:01.363 08:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:01.363 08:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:01.363 08:50:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:01.363 08:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:01.363 08:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:01.363 08:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:01.623 08:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:01.623 08:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:01.623 08:50:37 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:01.623 08:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:01.623 08:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:01.623 08:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:01.623 08:50:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:01.882 08:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:01.882 08:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:01.882 08:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:01.882 08:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:01.882 08:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:01.882 08:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:01.882 08:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:01.882 08:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:01.882 08:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:01.882 08:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:02.141 08:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:02.142 08:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:02.142 08:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:02.142 08:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 
-- # (( i = 1 )) 00:14:02.142 08:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:02.142 08:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:02.142 08:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:02.142 08:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:02.142 08:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:02.142 08:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:02.142 08:50:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.142 08:50:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.142 08:50:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.142 08:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:02.142 08:50:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.142 08:50:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.142 [2024-10-05 08:50:38.420042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:02.142 [2024-10-05 08:50:38.420095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.142 [2024-10-05 08:50:38.420117] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:02.142 [2024-10-05 08:50:38.420125] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.142 [2024-10-05 08:50:38.422303] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.142 [2024-10-05 08:50:38.422340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: spare 00:14:02.142 [2024-10-05 08:50:38.422425] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:02.142 [2024-10-05 08:50:38.422477] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:02.142 [2024-10-05 08:50:38.422611] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:02.142 [2024-10-05 08:50:38.422706] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:02.142 spare 00:14:02.142 08:50:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.142 08:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:02.142 08:50:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.142 08:50:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.142 [2024-10-05 08:50:38.522595] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:02.142 [2024-10-05 08:50:38.522656] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:02.142 [2024-10-05 08:50:38.522912] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:02.142 [2024-10-05 08:50:38.523084] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:02.142 [2024-10-05 08:50:38.523099] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:02.142 [2024-10-05 08:50:38.523243] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.142 08:50:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.142 08:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:02.142 08:50:38 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.142 08:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.142 08:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.142 08:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.142 08:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:02.142 08:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.142 08:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.142 08:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.142 08:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.142 08:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.142 08:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.142 08:50:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.142 08:50:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.142 08:50:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.142 08:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.142 "name": "raid_bdev1", 00:14:02.142 "uuid": "a162d9d5-ae76-45ed-959c-d787981369f9", 00:14:02.142 "strip_size_kb": 0, 00:14:02.142 "state": "online", 00:14:02.142 "raid_level": "raid1", 00:14:02.142 "superblock": true, 00:14:02.142 "num_base_bdevs": 4, 00:14:02.142 "num_base_bdevs_discovered": 3, 00:14:02.142 "num_base_bdevs_operational": 3, 00:14:02.142 "base_bdevs_list": [ 00:14:02.142 { 
00:14:02.142 "name": "spare", 00:14:02.142 "uuid": "723bf215-bc6d-57df-81c8-29859a87533b", 00:14:02.142 "is_configured": true, 00:14:02.142 "data_offset": 2048, 00:14:02.142 "data_size": 63488 00:14:02.142 }, 00:14:02.142 { 00:14:02.142 "name": null, 00:14:02.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.142 "is_configured": false, 00:14:02.142 "data_offset": 2048, 00:14:02.142 "data_size": 63488 00:14:02.142 }, 00:14:02.142 { 00:14:02.142 "name": "BaseBdev3", 00:14:02.142 "uuid": "fb9f5596-6b6c-51c6-b761-e40ab2dc6810", 00:14:02.142 "is_configured": true, 00:14:02.142 "data_offset": 2048, 00:14:02.142 "data_size": 63488 00:14:02.142 }, 00:14:02.142 { 00:14:02.142 "name": "BaseBdev4", 00:14:02.142 "uuid": "c09957d2-8cc4-5224-97ae-6799ae14978c", 00:14:02.142 "is_configured": true, 00:14:02.142 "data_offset": 2048, 00:14:02.142 "data_size": 63488 00:14:02.142 } 00:14:02.142 ] 00:14:02.142 }' 00:14:02.142 08:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.142 08:50:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.711 08:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:02.711 08:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.711 08:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:02.711 08:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:02.711 08:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.711 08:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.711 08:50:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.711 08:50:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:14:02.711 08:50:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.711 08:50:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.711 08:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.711 "name": "raid_bdev1", 00:14:02.711 "uuid": "a162d9d5-ae76-45ed-959c-d787981369f9", 00:14:02.711 "strip_size_kb": 0, 00:14:02.711 "state": "online", 00:14:02.711 "raid_level": "raid1", 00:14:02.711 "superblock": true, 00:14:02.711 "num_base_bdevs": 4, 00:14:02.711 "num_base_bdevs_discovered": 3, 00:14:02.711 "num_base_bdevs_operational": 3, 00:14:02.711 "base_bdevs_list": [ 00:14:02.711 { 00:14:02.711 "name": "spare", 00:14:02.711 "uuid": "723bf215-bc6d-57df-81c8-29859a87533b", 00:14:02.711 "is_configured": true, 00:14:02.711 "data_offset": 2048, 00:14:02.711 "data_size": 63488 00:14:02.711 }, 00:14:02.711 { 00:14:02.711 "name": null, 00:14:02.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.711 "is_configured": false, 00:14:02.711 "data_offset": 2048, 00:14:02.712 "data_size": 63488 00:14:02.712 }, 00:14:02.712 { 00:14:02.712 "name": "BaseBdev3", 00:14:02.712 "uuid": "fb9f5596-6b6c-51c6-b761-e40ab2dc6810", 00:14:02.712 "is_configured": true, 00:14:02.712 "data_offset": 2048, 00:14:02.712 "data_size": 63488 00:14:02.712 }, 00:14:02.712 { 00:14:02.712 "name": "BaseBdev4", 00:14:02.712 "uuid": "c09957d2-8cc4-5224-97ae-6799ae14978c", 00:14:02.712 "is_configured": true, 00:14:02.712 "data_offset": 2048, 00:14:02.712 "data_size": 63488 00:14:02.712 } 00:14:02.712 ] 00:14:02.712 }' 00:14:02.712 08:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.712 08:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:02.712 08:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.712 08:50:39 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:02.712 08:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:02.712 08:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.712 08:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.712 08:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.712 08:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.712 08:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:02.712 08:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:02.712 08:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.712 08:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.971 [2024-10-05 08:50:39.182741] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:02.972 08:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.972 08:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:02.972 08:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.972 08:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.972 08:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.972 08:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.972 08:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:02.972 08:50:39 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.972 08:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.972 08:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.972 08:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.972 08:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.972 08:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.972 08:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.972 08:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.972 08:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.972 08:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.972 "name": "raid_bdev1", 00:14:02.972 "uuid": "a162d9d5-ae76-45ed-959c-d787981369f9", 00:14:02.972 "strip_size_kb": 0, 00:14:02.972 "state": "online", 00:14:02.972 "raid_level": "raid1", 00:14:02.972 "superblock": true, 00:14:02.972 "num_base_bdevs": 4, 00:14:02.972 "num_base_bdevs_discovered": 2, 00:14:02.972 "num_base_bdevs_operational": 2, 00:14:02.972 "base_bdevs_list": [ 00:14:02.972 { 00:14:02.972 "name": null, 00:14:02.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.972 "is_configured": false, 00:14:02.972 "data_offset": 0, 00:14:02.972 "data_size": 63488 00:14:02.972 }, 00:14:02.972 { 00:14:02.972 "name": null, 00:14:02.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.972 "is_configured": false, 00:14:02.972 "data_offset": 2048, 00:14:02.972 "data_size": 63488 00:14:02.972 }, 00:14:02.972 { 00:14:02.972 "name": "BaseBdev3", 00:14:02.972 "uuid": "fb9f5596-6b6c-51c6-b761-e40ab2dc6810", 00:14:02.972 
"is_configured": true, 00:14:02.972 "data_offset": 2048, 00:14:02.972 "data_size": 63488 00:14:02.972 }, 00:14:02.972 { 00:14:02.972 "name": "BaseBdev4", 00:14:02.972 "uuid": "c09957d2-8cc4-5224-97ae-6799ae14978c", 00:14:02.972 "is_configured": true, 00:14:02.972 "data_offset": 2048, 00:14:02.972 "data_size": 63488 00:14:02.972 } 00:14:02.972 ] 00:14:02.972 }' 00:14:02.972 08:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.972 08:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.231 08:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:03.231 08:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.231 08:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.231 [2024-10-05 08:50:39.614027] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:03.231 [2024-10-05 08:50:39.614231] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:03.231 [2024-10-05 08:50:39.614296] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:03.231 [2024-10-05 08:50:39.614359] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:03.231 [2024-10-05 08:50:39.627541] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:14:03.231 08:50:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.231 08:50:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:03.231 [2024-10-05 08:50:39.629384] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:04.169 08:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:04.169 08:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.169 08:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:04.169 08:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:04.169 08:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.429 08:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.429 08:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.429 08:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.429 08:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.429 08:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.429 08:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.429 "name": "raid_bdev1", 00:14:04.429 "uuid": "a162d9d5-ae76-45ed-959c-d787981369f9", 00:14:04.429 "strip_size_kb": 0, 00:14:04.429 "state": "online", 00:14:04.429 "raid_level": "raid1", 
00:14:04.429 "superblock": true, 00:14:04.429 "num_base_bdevs": 4, 00:14:04.429 "num_base_bdevs_discovered": 3, 00:14:04.429 "num_base_bdevs_operational": 3, 00:14:04.429 "process": { 00:14:04.429 "type": "rebuild", 00:14:04.429 "target": "spare", 00:14:04.429 "progress": { 00:14:04.429 "blocks": 20480, 00:14:04.429 "percent": 32 00:14:04.429 } 00:14:04.429 }, 00:14:04.429 "base_bdevs_list": [ 00:14:04.429 { 00:14:04.429 "name": "spare", 00:14:04.429 "uuid": "723bf215-bc6d-57df-81c8-29859a87533b", 00:14:04.429 "is_configured": true, 00:14:04.429 "data_offset": 2048, 00:14:04.429 "data_size": 63488 00:14:04.429 }, 00:14:04.429 { 00:14:04.429 "name": null, 00:14:04.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.429 "is_configured": false, 00:14:04.429 "data_offset": 2048, 00:14:04.429 "data_size": 63488 00:14:04.429 }, 00:14:04.429 { 00:14:04.429 "name": "BaseBdev3", 00:14:04.429 "uuid": "fb9f5596-6b6c-51c6-b761-e40ab2dc6810", 00:14:04.429 "is_configured": true, 00:14:04.429 "data_offset": 2048, 00:14:04.429 "data_size": 63488 00:14:04.429 }, 00:14:04.429 { 00:14:04.429 "name": "BaseBdev4", 00:14:04.429 "uuid": "c09957d2-8cc4-5224-97ae-6799ae14978c", 00:14:04.429 "is_configured": true, 00:14:04.429 "data_offset": 2048, 00:14:04.429 "data_size": 63488 00:14:04.429 } 00:14:04.429 ] 00:14:04.429 }' 00:14:04.429 08:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.429 08:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:04.429 08:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.429 08:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:04.429 08:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:04.429 08:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:04.429 08:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.429 [2024-10-05 08:50:40.789855] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:04.429 [2024-10-05 08:50:40.834227] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:04.429 [2024-10-05 08:50:40.834278] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.429 [2024-10-05 08:50:40.834294] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:04.429 [2024-10-05 08:50:40.834301] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:04.429 08:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.429 08:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:04.429 08:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.429 08:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.429 08:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:04.429 08:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:04.429 08:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:04.429 08:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.429 08:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.429 08:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.429 08:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.429 08:50:40 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.429 08:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.429 08:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.429 08:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.429 08:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.688 08:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.688 "name": "raid_bdev1", 00:14:04.688 "uuid": "a162d9d5-ae76-45ed-959c-d787981369f9", 00:14:04.688 "strip_size_kb": 0, 00:14:04.688 "state": "online", 00:14:04.688 "raid_level": "raid1", 00:14:04.688 "superblock": true, 00:14:04.688 "num_base_bdevs": 4, 00:14:04.688 "num_base_bdevs_discovered": 2, 00:14:04.688 "num_base_bdevs_operational": 2, 00:14:04.688 "base_bdevs_list": [ 00:14:04.688 { 00:14:04.688 "name": null, 00:14:04.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.688 "is_configured": false, 00:14:04.688 "data_offset": 0, 00:14:04.688 "data_size": 63488 00:14:04.689 }, 00:14:04.689 { 00:14:04.689 "name": null, 00:14:04.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.689 "is_configured": false, 00:14:04.689 "data_offset": 2048, 00:14:04.689 "data_size": 63488 00:14:04.689 }, 00:14:04.689 { 00:14:04.689 "name": "BaseBdev3", 00:14:04.689 "uuid": "fb9f5596-6b6c-51c6-b761-e40ab2dc6810", 00:14:04.689 "is_configured": true, 00:14:04.689 "data_offset": 2048, 00:14:04.689 "data_size": 63488 00:14:04.689 }, 00:14:04.689 { 00:14:04.689 "name": "BaseBdev4", 00:14:04.689 "uuid": "c09957d2-8cc4-5224-97ae-6799ae14978c", 00:14:04.689 "is_configured": true, 00:14:04.689 "data_offset": 2048, 00:14:04.689 "data_size": 63488 00:14:04.689 } 00:14:04.689 ] 00:14:04.689 }' 00:14:04.689 08:50:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:04.689 08:50:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.949 08:50:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:04.949 08:50:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.949 08:50:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.949 [2024-10-05 08:50:41.248595] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:04.949 [2024-10-05 08:50:41.248699] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.949 [2024-10-05 08:50:41.248743] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:14:04.949 [2024-10-05 08:50:41.248772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.949 [2024-10-05 08:50:41.249283] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.949 [2024-10-05 08:50:41.249344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:04.949 [2024-10-05 08:50:41.249459] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:04.949 [2024-10-05 08:50:41.249499] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:04.949 [2024-10-05 08:50:41.249541] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:04.949 [2024-10-05 08:50:41.249596] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:04.949 [2024-10-05 08:50:41.263389] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:14:04.949 spare 00:14:04.949 08:50:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.949 [2024-10-05 08:50:41.265292] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:04.949 08:50:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:05.888 08:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:05.888 08:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.888 08:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:05.888 08:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:05.888 08:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.889 08:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.889 08:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.889 08:50:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.889 08:50:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.889 08:50:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.889 08:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.889 "name": "raid_bdev1", 00:14:05.889 "uuid": "a162d9d5-ae76-45ed-959c-d787981369f9", 00:14:05.889 "strip_size_kb": 0, 00:14:05.889 "state": "online", 00:14:05.889 
"raid_level": "raid1", 00:14:05.889 "superblock": true, 00:14:05.889 "num_base_bdevs": 4, 00:14:05.889 "num_base_bdevs_discovered": 3, 00:14:05.889 "num_base_bdevs_operational": 3, 00:14:05.889 "process": { 00:14:05.889 "type": "rebuild", 00:14:05.889 "target": "spare", 00:14:05.889 "progress": { 00:14:05.889 "blocks": 20480, 00:14:05.889 "percent": 32 00:14:05.889 } 00:14:05.889 }, 00:14:05.889 "base_bdevs_list": [ 00:14:05.889 { 00:14:05.889 "name": "spare", 00:14:05.889 "uuid": "723bf215-bc6d-57df-81c8-29859a87533b", 00:14:05.889 "is_configured": true, 00:14:05.889 "data_offset": 2048, 00:14:05.889 "data_size": 63488 00:14:05.889 }, 00:14:05.889 { 00:14:05.889 "name": null, 00:14:05.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.889 "is_configured": false, 00:14:05.889 "data_offset": 2048, 00:14:05.889 "data_size": 63488 00:14:05.889 }, 00:14:05.889 { 00:14:05.889 "name": "BaseBdev3", 00:14:05.889 "uuid": "fb9f5596-6b6c-51c6-b761-e40ab2dc6810", 00:14:05.889 "is_configured": true, 00:14:05.889 "data_offset": 2048, 00:14:05.889 "data_size": 63488 00:14:05.889 }, 00:14:05.889 { 00:14:05.889 "name": "BaseBdev4", 00:14:05.889 "uuid": "c09957d2-8cc4-5224-97ae-6799ae14978c", 00:14:05.889 "is_configured": true, 00:14:05.889 "data_offset": 2048, 00:14:05.889 "data_size": 63488 00:14:05.889 } 00:14:05.889 ] 00:14:05.889 }' 00:14:05.889 08:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.149 08:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:06.149 08:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.149 08:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:06.149 08:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:06.149 08:50:42 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.149 08:50:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.149 [2024-10-05 08:50:42.409180] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:06.149 [2024-10-05 08:50:42.470093] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:06.149 [2024-10-05 08:50:42.470154] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.149 [2024-10-05 08:50:42.470184] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:06.149 [2024-10-05 08:50:42.470193] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:06.149 08:50:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.149 08:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:06.149 08:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.149 08:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.149 08:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:06.149 08:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:06.149 08:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:06.149 08:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.149 08:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.149 08:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.149 08:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.149 
08:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.149 08:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.149 08:50:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.149 08:50:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.149 08:50:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.149 08:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.149 "name": "raid_bdev1", 00:14:06.149 "uuid": "a162d9d5-ae76-45ed-959c-d787981369f9", 00:14:06.149 "strip_size_kb": 0, 00:14:06.149 "state": "online", 00:14:06.149 "raid_level": "raid1", 00:14:06.149 "superblock": true, 00:14:06.149 "num_base_bdevs": 4, 00:14:06.149 "num_base_bdevs_discovered": 2, 00:14:06.149 "num_base_bdevs_operational": 2, 00:14:06.149 "base_bdevs_list": [ 00:14:06.149 { 00:14:06.149 "name": null, 00:14:06.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.149 "is_configured": false, 00:14:06.149 "data_offset": 0, 00:14:06.149 "data_size": 63488 00:14:06.149 }, 00:14:06.149 { 00:14:06.149 "name": null, 00:14:06.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.149 "is_configured": false, 00:14:06.149 "data_offset": 2048, 00:14:06.149 "data_size": 63488 00:14:06.149 }, 00:14:06.149 { 00:14:06.149 "name": "BaseBdev3", 00:14:06.149 "uuid": "fb9f5596-6b6c-51c6-b761-e40ab2dc6810", 00:14:06.149 "is_configured": true, 00:14:06.149 "data_offset": 2048, 00:14:06.149 "data_size": 63488 00:14:06.149 }, 00:14:06.149 { 00:14:06.149 "name": "BaseBdev4", 00:14:06.149 "uuid": "c09957d2-8cc4-5224-97ae-6799ae14978c", 00:14:06.149 "is_configured": true, 00:14:06.149 "data_offset": 2048, 00:14:06.149 "data_size": 63488 00:14:06.149 } 00:14:06.149 ] 00:14:06.149 }' 00:14:06.149 08:50:42 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.149 08:50:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.719 08:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:06.719 08:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.719 08:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:06.719 08:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:06.719 08:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.719 08:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.719 08:50:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.719 08:50:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.719 08:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.719 08:50:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.719 08:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.719 "name": "raid_bdev1", 00:14:06.720 "uuid": "a162d9d5-ae76-45ed-959c-d787981369f9", 00:14:06.720 "strip_size_kb": 0, 00:14:06.720 "state": "online", 00:14:06.720 "raid_level": "raid1", 00:14:06.720 "superblock": true, 00:14:06.720 "num_base_bdevs": 4, 00:14:06.720 "num_base_bdevs_discovered": 2, 00:14:06.720 "num_base_bdevs_operational": 2, 00:14:06.720 "base_bdevs_list": [ 00:14:06.720 { 00:14:06.720 "name": null, 00:14:06.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.720 "is_configured": false, 00:14:06.720 "data_offset": 0, 00:14:06.720 "data_size": 63488 00:14:06.720 }, 00:14:06.720 
{ 00:14:06.720 "name": null, 00:14:06.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.720 "is_configured": false, 00:14:06.720 "data_offset": 2048, 00:14:06.720 "data_size": 63488 00:14:06.720 }, 00:14:06.720 { 00:14:06.720 "name": "BaseBdev3", 00:14:06.720 "uuid": "fb9f5596-6b6c-51c6-b761-e40ab2dc6810", 00:14:06.720 "is_configured": true, 00:14:06.720 "data_offset": 2048, 00:14:06.720 "data_size": 63488 00:14:06.720 }, 00:14:06.720 { 00:14:06.720 "name": "BaseBdev4", 00:14:06.720 "uuid": "c09957d2-8cc4-5224-97ae-6799ae14978c", 00:14:06.720 "is_configured": true, 00:14:06.720 "data_offset": 2048, 00:14:06.720 "data_size": 63488 00:14:06.720 } 00:14:06.720 ] 00:14:06.720 }' 00:14:06.720 08:50:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.720 08:50:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:06.720 08:50:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.720 08:50:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:06.720 08:50:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:06.720 08:50:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.720 08:50:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.720 08:50:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.720 08:50:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:06.720 08:50:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.720 08:50:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.720 [2024-10-05 08:50:43.096300] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:06.720 [2024-10-05 08:50:43.096404] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.720 [2024-10-05 08:50:43.096427] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:14:06.720 [2024-10-05 08:50:43.096438] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.720 [2024-10-05 08:50:43.096855] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.720 [2024-10-05 08:50:43.096876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:06.720 [2024-10-05 08:50:43.096986] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:06.720 [2024-10-05 08:50:43.097003] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:06.720 [2024-10-05 08:50:43.097011] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:06.720 [2024-10-05 08:50:43.097024] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:06.720 BaseBdev1 00:14:06.720 08:50:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.720 08:50:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:07.657 08:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:07.657 08:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:07.657 08:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:07.657 08:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:07.657 08:50:44 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:07.657 08:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:07.657 08:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.657 08:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.657 08:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.657 08:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.657 08:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.657 08:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.657 08:50:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.657 08:50:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.915 08:50:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.915 08:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.915 "name": "raid_bdev1", 00:14:07.915 "uuid": "a162d9d5-ae76-45ed-959c-d787981369f9", 00:14:07.915 "strip_size_kb": 0, 00:14:07.915 "state": "online", 00:14:07.915 "raid_level": "raid1", 00:14:07.915 "superblock": true, 00:14:07.915 "num_base_bdevs": 4, 00:14:07.915 "num_base_bdevs_discovered": 2, 00:14:07.915 "num_base_bdevs_operational": 2, 00:14:07.915 "base_bdevs_list": [ 00:14:07.915 { 00:14:07.915 "name": null, 00:14:07.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.915 "is_configured": false, 00:14:07.915 "data_offset": 0, 00:14:07.915 "data_size": 63488 00:14:07.915 }, 00:14:07.915 { 00:14:07.915 "name": null, 00:14:07.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.915 
"is_configured": false, 00:14:07.915 "data_offset": 2048, 00:14:07.915 "data_size": 63488 00:14:07.915 }, 00:14:07.915 { 00:14:07.915 "name": "BaseBdev3", 00:14:07.915 "uuid": "fb9f5596-6b6c-51c6-b761-e40ab2dc6810", 00:14:07.915 "is_configured": true, 00:14:07.915 "data_offset": 2048, 00:14:07.915 "data_size": 63488 00:14:07.915 }, 00:14:07.915 { 00:14:07.915 "name": "BaseBdev4", 00:14:07.915 "uuid": "c09957d2-8cc4-5224-97ae-6799ae14978c", 00:14:07.915 "is_configured": true, 00:14:07.916 "data_offset": 2048, 00:14:07.916 "data_size": 63488 00:14:07.916 } 00:14:07.916 ] 00:14:07.916 }' 00:14:07.916 08:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.916 08:50:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.175 08:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:08.175 08:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.175 08:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:08.175 08:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:08.175 08:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.175 08:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.175 08:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.175 08:50:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.175 08:50:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.175 08:50:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.175 08:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:14:08.175 "name": "raid_bdev1", 00:14:08.175 "uuid": "a162d9d5-ae76-45ed-959c-d787981369f9", 00:14:08.175 "strip_size_kb": 0, 00:14:08.175 "state": "online", 00:14:08.175 "raid_level": "raid1", 00:14:08.175 "superblock": true, 00:14:08.175 "num_base_bdevs": 4, 00:14:08.175 "num_base_bdevs_discovered": 2, 00:14:08.175 "num_base_bdevs_operational": 2, 00:14:08.175 "base_bdevs_list": [ 00:14:08.175 { 00:14:08.175 "name": null, 00:14:08.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.175 "is_configured": false, 00:14:08.175 "data_offset": 0, 00:14:08.175 "data_size": 63488 00:14:08.175 }, 00:14:08.175 { 00:14:08.175 "name": null, 00:14:08.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.175 "is_configured": false, 00:14:08.175 "data_offset": 2048, 00:14:08.175 "data_size": 63488 00:14:08.175 }, 00:14:08.175 { 00:14:08.175 "name": "BaseBdev3", 00:14:08.175 "uuid": "fb9f5596-6b6c-51c6-b761-e40ab2dc6810", 00:14:08.175 "is_configured": true, 00:14:08.175 "data_offset": 2048, 00:14:08.175 "data_size": 63488 00:14:08.175 }, 00:14:08.175 { 00:14:08.175 "name": "BaseBdev4", 00:14:08.175 "uuid": "c09957d2-8cc4-5224-97ae-6799ae14978c", 00:14:08.175 "is_configured": true, 00:14:08.175 "data_offset": 2048, 00:14:08.175 "data_size": 63488 00:14:08.175 } 00:14:08.175 ] 00:14:08.175 }' 00:14:08.175 08:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.175 08:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:08.435 08:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.435 08:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:08.435 08:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:08.435 08:50:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:14:08.435 08:50:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:08.435 08:50:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:08.435 08:50:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:08.435 08:50:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:08.435 08:50:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:08.435 08:50:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:08.435 08:50:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.435 08:50:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.435 [2024-10-05 08:50:44.705513] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:08.435 [2024-10-05 08:50:44.705761] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:08.435 [2024-10-05 08:50:44.705824] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:08.435 request: 00:14:08.435 { 00:14:08.435 "base_bdev": "BaseBdev1", 00:14:08.435 "raid_bdev": "raid_bdev1", 00:14:08.435 "method": "bdev_raid_add_base_bdev", 00:14:08.435 "req_id": 1 00:14:08.435 } 00:14:08.435 Got JSON-RPC error response 00:14:08.435 response: 00:14:08.435 { 00:14:08.435 "code": -22, 00:14:08.435 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:08.435 } 00:14:08.435 08:50:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:08.435 08:50:44 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:14:08.435 08:50:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:08.435 08:50:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:08.435 08:50:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:08.435 08:50:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:09.407 08:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:09.407 08:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:09.407 08:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:09.407 08:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:09.407 08:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:09.407 08:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:09.407 08:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.407 08:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.407 08:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.407 08:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.407 08:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.407 08:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.407 08:50:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.407 08:50:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:09.407 08:50:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.407 08:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.407 "name": "raid_bdev1", 00:14:09.407 "uuid": "a162d9d5-ae76-45ed-959c-d787981369f9", 00:14:09.407 "strip_size_kb": 0, 00:14:09.407 "state": "online", 00:14:09.407 "raid_level": "raid1", 00:14:09.407 "superblock": true, 00:14:09.407 "num_base_bdevs": 4, 00:14:09.407 "num_base_bdevs_discovered": 2, 00:14:09.407 "num_base_bdevs_operational": 2, 00:14:09.407 "base_bdevs_list": [ 00:14:09.407 { 00:14:09.407 "name": null, 00:14:09.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.407 "is_configured": false, 00:14:09.407 "data_offset": 0, 00:14:09.407 "data_size": 63488 00:14:09.407 }, 00:14:09.407 { 00:14:09.407 "name": null, 00:14:09.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.407 "is_configured": false, 00:14:09.407 "data_offset": 2048, 00:14:09.407 "data_size": 63488 00:14:09.407 }, 00:14:09.407 { 00:14:09.407 "name": "BaseBdev3", 00:14:09.407 "uuid": "fb9f5596-6b6c-51c6-b761-e40ab2dc6810", 00:14:09.407 "is_configured": true, 00:14:09.407 "data_offset": 2048, 00:14:09.407 "data_size": 63488 00:14:09.407 }, 00:14:09.407 { 00:14:09.407 "name": "BaseBdev4", 00:14:09.408 "uuid": "c09957d2-8cc4-5224-97ae-6799ae14978c", 00:14:09.408 "is_configured": true, 00:14:09.408 "data_offset": 2048, 00:14:09.408 "data_size": 63488 00:14:09.408 } 00:14:09.408 ] 00:14:09.408 }' 00:14:09.408 08:50:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.408 08:50:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.006 08:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:10.006 08:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.006 08:50:46 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:10.006 08:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:10.006 08:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.006 08:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.006 08:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.006 08:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.006 08:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.006 08:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.006 08:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.006 "name": "raid_bdev1", 00:14:10.006 "uuid": "a162d9d5-ae76-45ed-959c-d787981369f9", 00:14:10.006 "strip_size_kb": 0, 00:14:10.006 "state": "online", 00:14:10.006 "raid_level": "raid1", 00:14:10.006 "superblock": true, 00:14:10.006 "num_base_bdevs": 4, 00:14:10.006 "num_base_bdevs_discovered": 2, 00:14:10.006 "num_base_bdevs_operational": 2, 00:14:10.006 "base_bdevs_list": [ 00:14:10.006 { 00:14:10.006 "name": null, 00:14:10.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.006 "is_configured": false, 00:14:10.006 "data_offset": 0, 00:14:10.006 "data_size": 63488 00:14:10.006 }, 00:14:10.006 { 00:14:10.006 "name": null, 00:14:10.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.006 "is_configured": false, 00:14:10.006 "data_offset": 2048, 00:14:10.006 "data_size": 63488 00:14:10.006 }, 00:14:10.006 { 00:14:10.006 "name": "BaseBdev3", 00:14:10.006 "uuid": "fb9f5596-6b6c-51c6-b761-e40ab2dc6810", 00:14:10.006 "is_configured": true, 00:14:10.006 "data_offset": 2048, 00:14:10.006 "data_size": 63488 00:14:10.006 }, 
00:14:10.006 { 00:14:10.006 "name": "BaseBdev4", 00:14:10.006 "uuid": "c09957d2-8cc4-5224-97ae-6799ae14978c", 00:14:10.006 "is_configured": true, 00:14:10.006 "data_offset": 2048, 00:14:10.006 "data_size": 63488 00:14:10.006 } 00:14:10.006 ] 00:14:10.006 }' 00:14:10.006 08:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.006 08:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:10.006 08:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.006 08:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:10.006 08:50:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75601 00:14:10.006 08:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 75601 ']' 00:14:10.006 08:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 75601 00:14:10.006 08:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:10.006 08:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:10.006 08:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75601 00:14:10.006 08:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:10.006 08:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:10.006 killing process with pid 75601 00:14:10.006 Received shutdown signal, test time was about 60.000000 seconds 00:14:10.006 00:14:10.006 Latency(us) 00:14:10.006 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:10.006 =================================================================================================================== 00:14:10.006 Total : 0.00 0.00 0.00 0.00 0.00 
18446744073709551616.00 0.00 00:14:10.006 08:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75601' 00:14:10.006 08:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 75601 00:14:10.006 [2024-10-05 08:50:46.388243] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:10.006 [2024-10-05 08:50:46.388354] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:10.006 [2024-10-05 08:50:46.388417] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:10.006 [2024-10-05 08:50:46.388427] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:10.006 08:50:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 75601 00:14:10.576 [2024-10-05 08:50:46.849081] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:11.957 08:50:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:11.957 00:14:11.957 real 0m25.502s 00:14:11.957 user 0m30.163s 00:14:11.957 sys 0m4.315s 00:14:11.957 08:50:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:11.957 ************************************ 00:14:11.957 END TEST raid_rebuild_test_sb 00:14:11.957 ************************************ 00:14:11.957 08:50:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.957 08:50:48 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:14:11.957 08:50:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:11.957 08:50:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:11.957 08:50:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:11.957 ************************************ 00:14:11.957 START TEST 
raid_rebuild_test_io 00:14:11.957 ************************************ 00:14:11.957 08:50:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:14:11.957 08:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:11.957 08:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:11.957 08:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:11.957 08:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:11.957 08:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:11.957 08:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:11.957 08:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:11.957 08:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:11.957 08:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:11.957 08:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:11.957 08:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:11.957 08:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:11.957 08:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:11.957 08:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:11.957 08:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:11.957 08:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:11.957 08:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:11.957 08:50:48 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:11.957 08:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:11.957 08:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:11.957 08:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:11.957 08:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:11.957 08:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:11.957 08:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:11.957 08:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:11.958 08:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:11.958 08:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:11.958 08:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:11.958 08:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:11.958 08:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76211 00:14:11.958 08:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:11.958 08:50:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76211 00:14:11.958 08:50:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 76211 ']' 00:14:11.958 08:50:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.958 08:50:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:11.958 08:50:48 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.958 08:50:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:11.958 08:50:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.958 [2024-10-05 08:50:48.229865] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:14:11.958 [2024-10-05 08:50:48.230087] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76211 ] 00:14:11.958 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:11.958 Zero copy mechanism will not be used. 00:14:11.958 [2024-10-05 08:50:48.405529] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.217 [2024-10-05 08:50:48.588816] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.476 [2024-10-05 08:50:48.763385] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:12.476 [2024-10-05 08:50:48.763439] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:12.735 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:12.735 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:14:12.735 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:12.735 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:12.735 08:50:49 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.735 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.735 BaseBdev1_malloc 00:14:12.735 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.735 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:12.735 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.735 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.735 [2024-10-05 08:50:49.082516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:12.735 [2024-10-05 08:50:49.082587] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.735 [2024-10-05 08:50:49.082608] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:12.735 [2024-10-05 08:50:49.082622] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.735 [2024-10-05 08:50:49.084577] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.735 [2024-10-05 08:50:49.084701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:12.735 BaseBdev1 00:14:12.735 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.735 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:12.735 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:12.735 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.735 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.735 BaseBdev2_malloc 00:14:12.736 
08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.736 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:12.736 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.736 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.736 [2024-10-05 08:50:49.169769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:12.736 [2024-10-05 08:50:49.169900] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.736 [2024-10-05 08:50:49.169922] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:12.736 [2024-10-05 08:50:49.169935] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.736 [2024-10-05 08:50:49.171828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.736 [2024-10-05 08:50:49.171867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:12.736 BaseBdev2 00:14:12.736 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.736 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:12.736 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:12.736 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.736 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.996 BaseBdev3_malloc 00:14:12.996 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.996 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:12.996 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.996 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.996 [2024-10-05 08:50:49.222577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:12.996 [2024-10-05 08:50:49.222633] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.996 [2024-10-05 08:50:49.222651] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:12.996 [2024-10-05 08:50:49.222661] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.996 [2024-10-05 08:50:49.224536] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.996 [2024-10-05 08:50:49.224577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:12.996 BaseBdev3 00:14:12.996 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.996 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:12.996 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:12.996 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.996 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.996 BaseBdev4_malloc 00:14:12.996 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.996 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:12.996 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.996 08:50:49 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.996 [2024-10-05 08:50:49.274089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:12.996 [2024-10-05 08:50:49.274143] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.996 [2024-10-05 08:50:49.274163] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:12.996 [2024-10-05 08:50:49.274174] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.996 [2024-10-05 08:50:49.276093] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.996 [2024-10-05 08:50:49.276132] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:12.996 BaseBdev4 00:14:12.996 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.996 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:12.996 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.996 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.996 spare_malloc 00:14:12.996 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.996 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:12.996 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.996 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.996 spare_delay 00:14:12.996 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.997 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd 
bdev_passthru_create -b spare_delay -p spare 00:14:12.997 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.997 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.997 [2024-10-05 08:50:49.338182] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:12.997 [2024-10-05 08:50:49.338239] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.997 [2024-10-05 08:50:49.338258] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:12.997 [2024-10-05 08:50:49.338268] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.997 [2024-10-05 08:50:49.340157] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.997 [2024-10-05 08:50:49.340251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:12.997 spare 00:14:12.997 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.997 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:12.997 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.997 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.997 [2024-10-05 08:50:49.350214] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:12.997 [2024-10-05 08:50:49.351863] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:12.997 [2024-10-05 08:50:49.351928] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:12.997 [2024-10-05 08:50:49.351989] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 
00:14:12.997 [2024-10-05 08:50:49.352065] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:12.997 [2024-10-05 08:50:49.352076] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:12.997 [2024-10-05 08:50:49.352303] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:12.997 [2024-10-05 08:50:49.352449] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:12.997 [2024-10-05 08:50:49.352460] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:12.997 [2024-10-05 08:50:49.352594] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:12.997 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.997 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:12.997 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.997 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.997 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:12.997 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:12.997 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:12.997 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.997 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.997 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.997 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.997 
08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.997 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.997 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.997 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.997 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.997 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.997 "name": "raid_bdev1", 00:14:12.997 "uuid": "7b00c1e6-7c58-4211-9425-cefd67ddda99", 00:14:12.997 "strip_size_kb": 0, 00:14:12.997 "state": "online", 00:14:12.997 "raid_level": "raid1", 00:14:12.997 "superblock": false, 00:14:12.997 "num_base_bdevs": 4, 00:14:12.997 "num_base_bdevs_discovered": 4, 00:14:12.997 "num_base_bdevs_operational": 4, 00:14:12.997 "base_bdevs_list": [ 00:14:12.997 { 00:14:12.997 "name": "BaseBdev1", 00:14:12.997 "uuid": "677686d9-d399-5ffe-82a7-1564a9fa749d", 00:14:12.997 "is_configured": true, 00:14:12.997 "data_offset": 0, 00:14:12.997 "data_size": 65536 00:14:12.997 }, 00:14:12.997 { 00:14:12.997 "name": "BaseBdev2", 00:14:12.997 "uuid": "a21aefcb-dab6-5225-89bb-b2f19410fe8e", 00:14:12.997 "is_configured": true, 00:14:12.997 "data_offset": 0, 00:14:12.997 "data_size": 65536 00:14:12.997 }, 00:14:12.997 { 00:14:12.997 "name": "BaseBdev3", 00:14:12.997 "uuid": "4b44ceba-efdd-5dd6-aa72-5aa467cf0265", 00:14:12.997 "is_configured": true, 00:14:12.997 "data_offset": 0, 00:14:12.997 "data_size": 65536 00:14:12.997 }, 00:14:12.997 { 00:14:12.997 "name": "BaseBdev4", 00:14:12.997 "uuid": "faf5c146-582f-5fc9-a7c6-31f8bc21c293", 00:14:12.997 "is_configured": true, 00:14:12.997 "data_offset": 0, 00:14:12.997 "data_size": 65536 00:14:12.997 } 00:14:12.997 ] 00:14:12.997 }' 00:14:12.997 08:50:49 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.997 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.567 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:13.567 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:13.567 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.567 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.567 [2024-10-05 08:50:49.805684] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:13.567 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.567 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:13.567 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.567 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.567 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.567 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:13.567 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.567 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:13.567 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:13.567 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:13.567 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:13.567 08:50:49 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.567 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.567 [2024-10-05 08:50:49.905200] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:13.567 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.567 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:13.567 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:13.567 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:13.567 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:13.567 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:13.567 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:13.567 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.567 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.567 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.567 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.567 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.567 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.567 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.567 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.567 08:50:49 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.567 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.567 "name": "raid_bdev1", 00:14:13.567 "uuid": "7b00c1e6-7c58-4211-9425-cefd67ddda99", 00:14:13.567 "strip_size_kb": 0, 00:14:13.567 "state": "online", 00:14:13.567 "raid_level": "raid1", 00:14:13.567 "superblock": false, 00:14:13.567 "num_base_bdevs": 4, 00:14:13.567 "num_base_bdevs_discovered": 3, 00:14:13.567 "num_base_bdevs_operational": 3, 00:14:13.567 "base_bdevs_list": [ 00:14:13.567 { 00:14:13.567 "name": null, 00:14:13.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.567 "is_configured": false, 00:14:13.567 "data_offset": 0, 00:14:13.567 "data_size": 65536 00:14:13.567 }, 00:14:13.567 { 00:14:13.567 "name": "BaseBdev2", 00:14:13.567 "uuid": "a21aefcb-dab6-5225-89bb-b2f19410fe8e", 00:14:13.567 "is_configured": true, 00:14:13.567 "data_offset": 0, 00:14:13.567 "data_size": 65536 00:14:13.567 }, 00:14:13.567 { 00:14:13.567 "name": "BaseBdev3", 00:14:13.567 "uuid": "4b44ceba-efdd-5dd6-aa72-5aa467cf0265", 00:14:13.567 "is_configured": true, 00:14:13.567 "data_offset": 0, 00:14:13.567 "data_size": 65536 00:14:13.567 }, 00:14:13.567 { 00:14:13.567 "name": "BaseBdev4", 00:14:13.567 "uuid": "faf5c146-582f-5fc9-a7c6-31f8bc21c293", 00:14:13.567 "is_configured": true, 00:14:13.567 "data_offset": 0, 00:14:13.567 "data_size": 65536 00:14:13.567 } 00:14:13.567 ] 00:14:13.567 }' 00:14:13.567 08:50:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.567 08:50:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.567 [2024-10-05 08:50:50.004507] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:13.567 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:13.567 Zero copy mechanism will not be used. 00:14:13.567 Running I/O for 60 seconds... 
00:14:14.137 08:50:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:14.137 08:50:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.137 08:50:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.137 [2024-10-05 08:50:50.373201] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:14.137 08:50:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.137 08:50:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:14.137 [2024-10-05 08:50:50.425780] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:14.137 [2024-10-05 08:50:50.427697] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:14.137 [2024-10-05 08:50:50.553714] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:14.137 [2024-10-05 08:50:50.555293] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:14.397 [2024-10-05 08:50:50.771439] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:14.397 [2024-10-05 08:50:50.772217] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:14.657 154.00 IOPS, 462.00 MiB/s [2024-10-05 08:50:51.126309] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:14.917 [2024-10-05 08:50:51.335403] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:14.917 [2024-10-05 08:50:51.335745] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: 
split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:15.176 08:50:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:15.176 08:50:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:15.176 08:50:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:15.176 08:50:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:15.176 08:50:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:15.176 08:50:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.176 08:50:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.176 08:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.176 08:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.176 08:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.176 08:50:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:15.176 "name": "raid_bdev1", 00:14:15.176 "uuid": "7b00c1e6-7c58-4211-9425-cefd67ddda99", 00:14:15.176 "strip_size_kb": 0, 00:14:15.176 "state": "online", 00:14:15.176 "raid_level": "raid1", 00:14:15.176 "superblock": false, 00:14:15.176 "num_base_bdevs": 4, 00:14:15.176 "num_base_bdevs_discovered": 4, 00:14:15.176 "num_base_bdevs_operational": 4, 00:14:15.176 "process": { 00:14:15.176 "type": "rebuild", 00:14:15.176 "target": "spare", 00:14:15.176 "progress": { 00:14:15.176 "blocks": 10240, 00:14:15.176 "percent": 15 00:14:15.176 } 00:14:15.176 }, 00:14:15.176 "base_bdevs_list": [ 00:14:15.176 { 00:14:15.176 "name": "spare", 00:14:15.176 "uuid": "25b11bb3-14eb-5641-8587-63dac5634cd0", 00:14:15.176 
"is_configured": true, 00:14:15.176 "data_offset": 0, 00:14:15.176 "data_size": 65536 00:14:15.176 }, 00:14:15.176 { 00:14:15.176 "name": "BaseBdev2", 00:14:15.176 "uuid": "a21aefcb-dab6-5225-89bb-b2f19410fe8e", 00:14:15.176 "is_configured": true, 00:14:15.176 "data_offset": 0, 00:14:15.176 "data_size": 65536 00:14:15.177 }, 00:14:15.177 { 00:14:15.177 "name": "BaseBdev3", 00:14:15.177 "uuid": "4b44ceba-efdd-5dd6-aa72-5aa467cf0265", 00:14:15.177 "is_configured": true, 00:14:15.177 "data_offset": 0, 00:14:15.177 "data_size": 65536 00:14:15.177 }, 00:14:15.177 { 00:14:15.177 "name": "BaseBdev4", 00:14:15.177 "uuid": "faf5c146-582f-5fc9-a7c6-31f8bc21c293", 00:14:15.177 "is_configured": true, 00:14:15.177 "data_offset": 0, 00:14:15.177 "data_size": 65536 00:14:15.177 } 00:14:15.177 ] 00:14:15.177 }' 00:14:15.177 08:50:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:15.177 08:50:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:15.177 08:50:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:15.177 08:50:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:15.177 08:50:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:15.177 08:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.177 08:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.177 [2024-10-05 08:50:51.573061] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:15.436 [2024-10-05 08:50:51.669791] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:15.436 [2024-10-05 08:50:51.771929] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: 
No such device 00:14:15.436 [2024-10-05 08:50:51.781837] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:15.436 [2024-10-05 08:50:51.781877] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:15.437 [2024-10-05 08:50:51.781894] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:15.437 [2024-10-05 08:50:51.805023] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:15.437 08:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.437 08:50:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:15.437 08:50:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:15.437 08:50:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:15.437 08:50:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:15.437 08:50:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:15.437 08:50:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:15.437 08:50:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.437 08:50:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.437 08:50:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.437 08:50:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.437 08:50:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.437 08:50:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.437 08:50:51 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.437 08:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.437 08:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.437 08:50:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.437 "name": "raid_bdev1", 00:14:15.437 "uuid": "7b00c1e6-7c58-4211-9425-cefd67ddda99", 00:14:15.437 "strip_size_kb": 0, 00:14:15.437 "state": "online", 00:14:15.437 "raid_level": "raid1", 00:14:15.437 "superblock": false, 00:14:15.437 "num_base_bdevs": 4, 00:14:15.437 "num_base_bdevs_discovered": 3, 00:14:15.437 "num_base_bdevs_operational": 3, 00:14:15.437 "base_bdevs_list": [ 00:14:15.437 { 00:14:15.437 "name": null, 00:14:15.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.437 "is_configured": false, 00:14:15.437 "data_offset": 0, 00:14:15.437 "data_size": 65536 00:14:15.437 }, 00:14:15.437 { 00:14:15.437 "name": "BaseBdev2", 00:14:15.437 "uuid": "a21aefcb-dab6-5225-89bb-b2f19410fe8e", 00:14:15.437 "is_configured": true, 00:14:15.437 "data_offset": 0, 00:14:15.437 "data_size": 65536 00:14:15.437 }, 00:14:15.437 { 00:14:15.437 "name": "BaseBdev3", 00:14:15.437 "uuid": "4b44ceba-efdd-5dd6-aa72-5aa467cf0265", 00:14:15.437 "is_configured": true, 00:14:15.437 "data_offset": 0, 00:14:15.437 "data_size": 65536 00:14:15.437 }, 00:14:15.437 { 00:14:15.437 "name": "BaseBdev4", 00:14:15.437 "uuid": "faf5c146-582f-5fc9-a7c6-31f8bc21c293", 00:14:15.437 "is_configured": true, 00:14:15.437 "data_offset": 0, 00:14:15.437 "data_size": 65536 00:14:15.437 } 00:14:15.437 ] 00:14:15.437 }' 00:14:15.437 08:50:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.437 08:50:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.956 134.00 IOPS, 402.00 MiB/s 08:50:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 
-- # verify_raid_bdev_process raid_bdev1 none none 00:14:15.956 08:50:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:15.956 08:50:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:15.956 08:50:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:15.956 08:50:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:15.956 08:50:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.957 08:50:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.957 08:50:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.957 08:50:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.957 08:50:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.957 08:50:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:15.957 "name": "raid_bdev1", 00:14:15.957 "uuid": "7b00c1e6-7c58-4211-9425-cefd67ddda99", 00:14:15.957 "strip_size_kb": 0, 00:14:15.957 "state": "online", 00:14:15.957 "raid_level": "raid1", 00:14:15.957 "superblock": false, 00:14:15.957 "num_base_bdevs": 4, 00:14:15.957 "num_base_bdevs_discovered": 3, 00:14:15.957 "num_base_bdevs_operational": 3, 00:14:15.957 "base_bdevs_list": [ 00:14:15.957 { 00:14:15.957 "name": null, 00:14:15.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.957 "is_configured": false, 00:14:15.957 "data_offset": 0, 00:14:15.957 "data_size": 65536 00:14:15.957 }, 00:14:15.957 { 00:14:15.957 "name": "BaseBdev2", 00:14:15.957 "uuid": "a21aefcb-dab6-5225-89bb-b2f19410fe8e", 00:14:15.957 "is_configured": true, 00:14:15.957 "data_offset": 0, 00:14:15.957 "data_size": 65536 00:14:15.957 }, 00:14:15.957 { 00:14:15.957 "name": 
"BaseBdev3", 00:14:15.957 "uuid": "4b44ceba-efdd-5dd6-aa72-5aa467cf0265", 00:14:15.957 "is_configured": true, 00:14:15.957 "data_offset": 0, 00:14:15.957 "data_size": 65536 00:14:15.957 }, 00:14:15.957 { 00:14:15.957 "name": "BaseBdev4", 00:14:15.957 "uuid": "faf5c146-582f-5fc9-a7c6-31f8bc21c293", 00:14:15.957 "is_configured": true, 00:14:15.957 "data_offset": 0, 00:14:15.957 "data_size": 65536 00:14:15.957 } 00:14:15.957 ] 00:14:15.957 }' 00:14:15.957 08:50:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:15.957 08:50:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:15.957 08:50:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:15.957 08:50:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:15.957 08:50:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:15.957 08:50:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.957 08:50:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.957 [2024-10-05 08:50:52.380426] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:15.957 08:50:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.957 08:50:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:16.217 [2024-10-05 08:50:52.439215] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:16.217 [2024-10-05 08:50:52.441118] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:16.217 [2024-10-05 08:50:52.561049] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:16.217 [2024-10-05 
08:50:52.562402] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:16.476 [2024-10-05 08:50:52.791820] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:16.476 [2024-10-05 08:50:52.792671] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:16.735 149.00 IOPS, 447.00 MiB/s [2024-10-05 08:50:53.141123] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:16.995 [2024-10-05 08:50:53.250498] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:16.995 [2024-10-05 08:50:53.250888] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:16.995 08:50:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:16.995 08:50:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.995 08:50:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:16.995 08:50:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:16.995 08:50:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.995 08:50:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.995 08:50:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.995 08:50:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.995 08:50:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.995 08:50:53 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.254 08:50:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.254 "name": "raid_bdev1", 00:14:17.254 "uuid": "7b00c1e6-7c58-4211-9425-cefd67ddda99", 00:14:17.254 "strip_size_kb": 0, 00:14:17.254 "state": "online", 00:14:17.254 "raid_level": "raid1", 00:14:17.254 "superblock": false, 00:14:17.255 "num_base_bdevs": 4, 00:14:17.255 "num_base_bdevs_discovered": 4, 00:14:17.255 "num_base_bdevs_operational": 4, 00:14:17.255 "process": { 00:14:17.255 "type": "rebuild", 00:14:17.255 "target": "spare", 00:14:17.255 "progress": { 00:14:17.255 "blocks": 10240, 00:14:17.255 "percent": 15 00:14:17.255 } 00:14:17.255 }, 00:14:17.255 "base_bdevs_list": [ 00:14:17.255 { 00:14:17.255 "name": "spare", 00:14:17.255 "uuid": "25b11bb3-14eb-5641-8587-63dac5634cd0", 00:14:17.255 "is_configured": true, 00:14:17.255 "data_offset": 0, 00:14:17.255 "data_size": 65536 00:14:17.255 }, 00:14:17.255 { 00:14:17.255 "name": "BaseBdev2", 00:14:17.255 "uuid": "a21aefcb-dab6-5225-89bb-b2f19410fe8e", 00:14:17.255 "is_configured": true, 00:14:17.255 "data_offset": 0, 00:14:17.255 "data_size": 65536 00:14:17.255 }, 00:14:17.255 { 00:14:17.255 "name": "BaseBdev3", 00:14:17.255 "uuid": "4b44ceba-efdd-5dd6-aa72-5aa467cf0265", 00:14:17.255 "is_configured": true, 00:14:17.255 "data_offset": 0, 00:14:17.255 "data_size": 65536 00:14:17.255 }, 00:14:17.255 { 00:14:17.255 "name": "BaseBdev4", 00:14:17.255 "uuid": "faf5c146-582f-5fc9-a7c6-31f8bc21c293", 00:14:17.255 "is_configured": true, 00:14:17.255 "data_offset": 0, 00:14:17.255 "data_size": 65536 00:14:17.255 } 00:14:17.255 ] 00:14:17.255 }' 00:14:17.255 08:50:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.255 08:50:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:17.255 08:50:53 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.255 [2024-10-05 08:50:53.579099] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:17.255 [2024-10-05 08:50:53.579601] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:17.255 08:50:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:17.255 08:50:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:17.255 08:50:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:17.255 08:50:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:17.255 08:50:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:17.255 08:50:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:17.255 08:50:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.255 08:50:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.255 [2024-10-05 08:50:53.596623] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:17.514 [2024-10-05 08:50:53.803223] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:17.514 [2024-10-05 08:50:53.803980] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:17.514 [2024-10-05 08:50:53.911229] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:17.514 [2024-10-05 08:50:53.911263] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:17.514 08:50:53 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.514 08:50:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:17.514 08:50:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:17.514 08:50:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:17.514 08:50:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.514 08:50:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:17.514 08:50:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:17.514 08:50:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.514 08:50:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.514 08:50:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.514 08:50:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.514 08:50:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.514 08:50:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.514 08:50:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.514 "name": "raid_bdev1", 00:14:17.514 "uuid": "7b00c1e6-7c58-4211-9425-cefd67ddda99", 00:14:17.514 "strip_size_kb": 0, 00:14:17.514 "state": "online", 00:14:17.514 "raid_level": "raid1", 00:14:17.514 "superblock": false, 00:14:17.514 "num_base_bdevs": 4, 00:14:17.514 "num_base_bdevs_discovered": 3, 00:14:17.514 "num_base_bdevs_operational": 3, 00:14:17.514 "process": { 00:14:17.514 "type": "rebuild", 00:14:17.514 "target": "spare", 00:14:17.514 "progress": { 00:14:17.514 
"blocks": 16384, 00:14:17.514 "percent": 25 00:14:17.514 } 00:14:17.514 }, 00:14:17.514 "base_bdevs_list": [ 00:14:17.514 { 00:14:17.514 "name": "spare", 00:14:17.514 "uuid": "25b11bb3-14eb-5641-8587-63dac5634cd0", 00:14:17.514 "is_configured": true, 00:14:17.514 "data_offset": 0, 00:14:17.514 "data_size": 65536 00:14:17.514 }, 00:14:17.514 { 00:14:17.514 "name": null, 00:14:17.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.514 "is_configured": false, 00:14:17.514 "data_offset": 0, 00:14:17.514 "data_size": 65536 00:14:17.514 }, 00:14:17.514 { 00:14:17.514 "name": "BaseBdev3", 00:14:17.514 "uuid": "4b44ceba-efdd-5dd6-aa72-5aa467cf0265", 00:14:17.514 "is_configured": true, 00:14:17.514 "data_offset": 0, 00:14:17.514 "data_size": 65536 00:14:17.514 }, 00:14:17.514 { 00:14:17.514 "name": "BaseBdev4", 00:14:17.514 "uuid": "faf5c146-582f-5fc9-a7c6-31f8bc21c293", 00:14:17.514 "is_configured": true, 00:14:17.514 "data_offset": 0, 00:14:17.514 "data_size": 65536 00:14:17.514 } 00:14:17.514 ] 00:14:17.514 }' 00:14:17.514 08:50:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.774 08:50:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:17.774 08:50:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.774 127.75 IOPS, 383.25 MiB/s 08:50:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:17.774 08:50:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=486 00:14:17.774 08:50:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:17.774 08:50:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:17.774 08:50:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.774 08:50:54 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:17.774 08:50:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:17.774 08:50:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.774 08:50:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.774 08:50:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.774 08:50:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.774 08:50:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.774 08:50:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.774 08:50:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.774 "name": "raid_bdev1", 00:14:17.774 "uuid": "7b00c1e6-7c58-4211-9425-cefd67ddda99", 00:14:17.774 "strip_size_kb": 0, 00:14:17.774 "state": "online", 00:14:17.774 "raid_level": "raid1", 00:14:17.774 "superblock": false, 00:14:17.774 "num_base_bdevs": 4, 00:14:17.774 "num_base_bdevs_discovered": 3, 00:14:17.774 "num_base_bdevs_operational": 3, 00:14:17.774 "process": { 00:14:17.774 "type": "rebuild", 00:14:17.774 "target": "spare", 00:14:17.774 "progress": { 00:14:17.774 "blocks": 18432, 00:14:17.774 "percent": 28 00:14:17.774 } 00:14:17.774 }, 00:14:17.774 "base_bdevs_list": [ 00:14:17.774 { 00:14:17.774 "name": "spare", 00:14:17.774 "uuid": "25b11bb3-14eb-5641-8587-63dac5634cd0", 00:14:17.774 "is_configured": true, 00:14:17.774 "data_offset": 0, 00:14:17.774 "data_size": 65536 00:14:17.774 }, 00:14:17.774 { 00:14:17.774 "name": null, 00:14:17.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.774 "is_configured": false, 00:14:17.774 "data_offset": 0, 00:14:17.774 "data_size": 65536 00:14:17.774 }, 00:14:17.774 { 
00:14:17.774 "name": "BaseBdev3", 00:14:17.774 "uuid": "4b44ceba-efdd-5dd6-aa72-5aa467cf0265", 00:14:17.774 "is_configured": true, 00:14:17.774 "data_offset": 0, 00:14:17.774 "data_size": 65536 00:14:17.774 }, 00:14:17.774 { 00:14:17.774 "name": "BaseBdev4", 00:14:17.774 "uuid": "faf5c146-582f-5fc9-a7c6-31f8bc21c293", 00:14:17.774 "is_configured": true, 00:14:17.774 "data_offset": 0, 00:14:17.774 "data_size": 65536 00:14:17.774 } 00:14:17.774 ] 00:14:17.774 }' 00:14:17.774 08:50:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.774 08:50:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:17.774 08:50:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.774 [2024-10-05 08:50:54.180023] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:17.774 [2024-10-05 08:50:54.180945] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:17.774 08:50:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:17.774 08:50:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:18.034 [2024-10-05 08:50:54.421189] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:18.294 [2024-10-05 08:50:54.752263] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:18.813 115.00 IOPS, 345.00 MiB/s 08:50:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:18.813 08:50:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:18.813 08:50:55 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.813 08:50:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:18.813 08:50:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:18.813 08:50:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.813 08:50:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.813 08:50:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.813 08:50:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.813 08:50:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.813 08:50:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.813 08:50:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.813 "name": "raid_bdev1", 00:14:18.813 "uuid": "7b00c1e6-7c58-4211-9425-cefd67ddda99", 00:14:18.813 "strip_size_kb": 0, 00:14:18.813 "state": "online", 00:14:18.813 "raid_level": "raid1", 00:14:18.813 "superblock": false, 00:14:18.813 "num_base_bdevs": 4, 00:14:18.813 "num_base_bdevs_discovered": 3, 00:14:18.813 "num_base_bdevs_operational": 3, 00:14:18.813 "process": { 00:14:18.813 "type": "rebuild", 00:14:18.813 "target": "spare", 00:14:18.813 "progress": { 00:14:18.813 "blocks": 34816, 00:14:18.813 "percent": 53 00:14:18.813 } 00:14:18.813 }, 00:14:18.813 "base_bdevs_list": [ 00:14:18.813 { 00:14:18.813 "name": "spare", 00:14:18.813 "uuid": "25b11bb3-14eb-5641-8587-63dac5634cd0", 00:14:18.813 "is_configured": true, 00:14:18.813 "data_offset": 0, 00:14:18.813 "data_size": 65536 00:14:18.813 }, 00:14:18.813 { 00:14:18.813 "name": null, 00:14:18.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.813 "is_configured": false, 00:14:18.813 
"data_offset": 0, 00:14:18.813 "data_size": 65536 00:14:18.813 }, 00:14:18.813 { 00:14:18.813 "name": "BaseBdev3", 00:14:18.813 "uuid": "4b44ceba-efdd-5dd6-aa72-5aa467cf0265", 00:14:18.813 "is_configured": true, 00:14:18.813 "data_offset": 0, 00:14:18.813 "data_size": 65536 00:14:18.813 }, 00:14:18.813 { 00:14:18.813 "name": "BaseBdev4", 00:14:18.813 "uuid": "faf5c146-582f-5fc9-a7c6-31f8bc21c293", 00:14:18.813 "is_configured": true, 00:14:18.813 "data_offset": 0, 00:14:18.813 "data_size": 65536 00:14:18.813 } 00:14:18.813 ] 00:14:18.813 }' 00:14:18.813 08:50:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.073 08:50:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:19.073 08:50:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.073 08:50:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:19.073 08:50:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:19.332 [2024-10-05 08:50:55.760937] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:19.591 [2024-10-05 08:50:55.887005] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:20.161 104.50 IOPS, 313.50 MiB/s [2024-10-05 08:50:56.326443] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:14:20.161 08:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:20.161 08:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:20.161 08:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.161 08:50:56 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:20.161 08:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:20.161 08:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.161 08:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.161 08:50:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.161 08:50:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.161 08:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.161 08:50:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.161 08:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.161 "name": "raid_bdev1", 00:14:20.161 "uuid": "7b00c1e6-7c58-4211-9425-cefd67ddda99", 00:14:20.161 "strip_size_kb": 0, 00:14:20.161 "state": "online", 00:14:20.161 "raid_level": "raid1", 00:14:20.161 "superblock": false, 00:14:20.161 "num_base_bdevs": 4, 00:14:20.161 "num_base_bdevs_discovered": 3, 00:14:20.161 "num_base_bdevs_operational": 3, 00:14:20.161 "process": { 00:14:20.161 "type": "rebuild", 00:14:20.161 "target": "spare", 00:14:20.161 "progress": { 00:14:20.161 "blocks": 53248, 00:14:20.161 "percent": 81 00:14:20.161 } 00:14:20.161 }, 00:14:20.161 "base_bdevs_list": [ 00:14:20.161 { 00:14:20.161 "name": "spare", 00:14:20.161 "uuid": "25b11bb3-14eb-5641-8587-63dac5634cd0", 00:14:20.161 "is_configured": true, 00:14:20.161 "data_offset": 0, 00:14:20.161 "data_size": 65536 00:14:20.161 }, 00:14:20.161 { 00:14:20.161 "name": null, 00:14:20.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.161 "is_configured": false, 00:14:20.161 "data_offset": 0, 00:14:20.161 "data_size": 65536 00:14:20.161 }, 00:14:20.161 { 
00:14:20.161 "name": "BaseBdev3", 00:14:20.161 "uuid": "4b44ceba-efdd-5dd6-aa72-5aa467cf0265", 00:14:20.161 "is_configured": true, 00:14:20.161 "data_offset": 0, 00:14:20.161 "data_size": 65536 00:14:20.161 }, 00:14:20.161 { 00:14:20.161 "name": "BaseBdev4", 00:14:20.161 "uuid": "faf5c146-582f-5fc9-a7c6-31f8bc21c293", 00:14:20.161 "is_configured": true, 00:14:20.161 "data_offset": 0, 00:14:20.161 "data_size": 65536 00:14:20.161 } 00:14:20.161 ] 00:14:20.161 }' 00:14:20.161 08:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.161 08:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:20.161 08:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.161 08:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:20.161 08:50:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:20.731 [2024-10-05 08:50:56.966996] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:20.731 95.71 IOPS, 287.14 MiB/s [2024-10-05 08:50:57.071761] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:20.731 [2024-10-05 08:50:57.075593] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:21.301 08:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:21.301 08:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:21.301 08:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.301 08:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:21.301 08:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:21.301 
08:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.301 08:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.301 08:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.301 08:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.301 08:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.301 08:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.301 08:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.301 "name": "raid_bdev1", 00:14:21.301 "uuid": "7b00c1e6-7c58-4211-9425-cefd67ddda99", 00:14:21.301 "strip_size_kb": 0, 00:14:21.301 "state": "online", 00:14:21.301 "raid_level": "raid1", 00:14:21.301 "superblock": false, 00:14:21.301 "num_base_bdevs": 4, 00:14:21.301 "num_base_bdevs_discovered": 3, 00:14:21.301 "num_base_bdevs_operational": 3, 00:14:21.301 "base_bdevs_list": [ 00:14:21.301 { 00:14:21.301 "name": "spare", 00:14:21.301 "uuid": "25b11bb3-14eb-5641-8587-63dac5634cd0", 00:14:21.301 "is_configured": true, 00:14:21.301 "data_offset": 0, 00:14:21.301 "data_size": 65536 00:14:21.301 }, 00:14:21.301 { 00:14:21.301 "name": null, 00:14:21.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.301 "is_configured": false, 00:14:21.301 "data_offset": 0, 00:14:21.301 "data_size": 65536 00:14:21.301 }, 00:14:21.301 { 00:14:21.301 "name": "BaseBdev3", 00:14:21.301 "uuid": "4b44ceba-efdd-5dd6-aa72-5aa467cf0265", 00:14:21.301 "is_configured": true, 00:14:21.301 "data_offset": 0, 00:14:21.301 "data_size": 65536 00:14:21.301 }, 00:14:21.301 { 00:14:21.301 "name": "BaseBdev4", 00:14:21.301 "uuid": "faf5c146-582f-5fc9-a7c6-31f8bc21c293", 00:14:21.301 "is_configured": true, 00:14:21.301 "data_offset": 0, 00:14:21.301 "data_size": 
65536 00:14:21.301 } 00:14:21.301 ] 00:14:21.301 }' 00:14:21.301 08:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.301 08:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:21.301 08:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.301 08:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:21.301 08:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:21.301 08:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:21.301 08:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.301 08:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:21.301 08:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:21.301 08:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.301 08:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.301 08:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.301 08:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.301 08:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.301 08:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.301 08:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.301 "name": "raid_bdev1", 00:14:21.301 "uuid": "7b00c1e6-7c58-4211-9425-cefd67ddda99", 00:14:21.301 "strip_size_kb": 0, 00:14:21.301 "state": "online", 00:14:21.301 "raid_level": "raid1", 
00:14:21.301 "superblock": false, 00:14:21.301 "num_base_bdevs": 4, 00:14:21.301 "num_base_bdevs_discovered": 3, 00:14:21.301 "num_base_bdevs_operational": 3, 00:14:21.301 "base_bdevs_list": [ 00:14:21.301 { 00:14:21.301 "name": "spare", 00:14:21.301 "uuid": "25b11bb3-14eb-5641-8587-63dac5634cd0", 00:14:21.301 "is_configured": true, 00:14:21.301 "data_offset": 0, 00:14:21.301 "data_size": 65536 00:14:21.301 }, 00:14:21.301 { 00:14:21.301 "name": null, 00:14:21.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.301 "is_configured": false, 00:14:21.301 "data_offset": 0, 00:14:21.301 "data_size": 65536 00:14:21.301 }, 00:14:21.301 { 00:14:21.301 "name": "BaseBdev3", 00:14:21.301 "uuid": "4b44ceba-efdd-5dd6-aa72-5aa467cf0265", 00:14:21.301 "is_configured": true, 00:14:21.301 "data_offset": 0, 00:14:21.301 "data_size": 65536 00:14:21.301 }, 00:14:21.301 { 00:14:21.301 "name": "BaseBdev4", 00:14:21.301 "uuid": "faf5c146-582f-5fc9-a7c6-31f8bc21c293", 00:14:21.301 "is_configured": true, 00:14:21.301 "data_offset": 0, 00:14:21.301 "data_size": 65536 00:14:21.301 } 00:14:21.301 ] 00:14:21.301 }' 00:14:21.301 08:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.302 08:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:21.302 08:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.561 08:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:21.561 08:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:21.561 08:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:21.561 08:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:21.561 08:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:14:21.561 08:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:21.561 08:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:21.561 08:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.561 08:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.561 08:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.561 08:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.561 08:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.561 08:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.561 08:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.561 08:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.561 08:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.561 08:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.561 "name": "raid_bdev1", 00:14:21.561 "uuid": "7b00c1e6-7c58-4211-9425-cefd67ddda99", 00:14:21.561 "strip_size_kb": 0, 00:14:21.561 "state": "online", 00:14:21.561 "raid_level": "raid1", 00:14:21.561 "superblock": false, 00:14:21.561 "num_base_bdevs": 4, 00:14:21.561 "num_base_bdevs_discovered": 3, 00:14:21.561 "num_base_bdevs_operational": 3, 00:14:21.561 "base_bdevs_list": [ 00:14:21.561 { 00:14:21.561 "name": "spare", 00:14:21.561 "uuid": "25b11bb3-14eb-5641-8587-63dac5634cd0", 00:14:21.561 "is_configured": true, 00:14:21.561 "data_offset": 0, 00:14:21.561 "data_size": 65536 00:14:21.561 }, 00:14:21.561 { 00:14:21.561 "name": null, 00:14:21.561 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:21.561 "is_configured": false, 00:14:21.561 "data_offset": 0, 00:14:21.561 "data_size": 65536 00:14:21.561 }, 00:14:21.561 { 00:14:21.561 "name": "BaseBdev3", 00:14:21.561 "uuid": "4b44ceba-efdd-5dd6-aa72-5aa467cf0265", 00:14:21.561 "is_configured": true, 00:14:21.561 "data_offset": 0, 00:14:21.561 "data_size": 65536 00:14:21.561 }, 00:14:21.561 { 00:14:21.561 "name": "BaseBdev4", 00:14:21.561 "uuid": "faf5c146-582f-5fc9-a7c6-31f8bc21c293", 00:14:21.561 "is_configured": true, 00:14:21.561 "data_offset": 0, 00:14:21.561 "data_size": 65536 00:14:21.561 } 00:14:21.561 ] 00:14:21.561 }' 00:14:21.561 08:50:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.561 08:50:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.821 88.00 IOPS, 264.00 MiB/s 08:50:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:21.821 08:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.821 08:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.821 [2024-10-05 08:50:58.176244] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:21.821 [2024-10-05 08:50:58.176358] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:21.821 00:14:21.821 Latency(us) 00:14:21.821 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.821 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:21.821 raid_bdev1 : 8.27 86.50 259.51 0.00 0.00 15923.01 291.55 114473.36 00:14:21.821 =================================================================================================================== 00:14:21.821 Total : 86.50 259.51 0.00 0.00 15923.01 291.55 114473.36 00:14:21.821 [2024-10-05 08:50:58.276372] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:21.821 [2024-10-05 08:50:58.276457] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:21.821 [2024-10-05 08:50:58.276564] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:21.821 [2024-10-05 08:50:58.276613] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:21.821 { 00:14:21.821 "results": [ 00:14:21.821 { 00:14:21.821 "job": "raid_bdev1", 00:14:21.821 "core_mask": "0x1", 00:14:21.821 "workload": "randrw", 00:14:21.821 "percentage": 50, 00:14:21.821 "status": "finished", 00:14:21.821 "queue_depth": 2, 00:14:21.821 "io_size": 3145728, 00:14:21.821 "runtime": 8.265491, 00:14:21.821 "iops": 86.50423792125598, 00:14:21.821 "mibps": 259.5127137637679, 00:14:21.821 "io_failed": 0, 00:14:21.821 "io_timeout": 0, 00:14:21.821 "avg_latency_us": 15923.007063853176, 00:14:21.821 "min_latency_us": 291.54934497816595, 00:14:21.821 "max_latency_us": 114473.36244541485 00:14:21.821 } 00:14:21.821 ], 00:14:21.821 "core_count": 1 00:14:21.821 } 00:14:21.821 08:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.821 08:50:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.821 08:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.821 08:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.821 08:50:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:22.080 08:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.080 08:50:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:22.080 08:50:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:22.080 
08:50:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:22.080 08:50:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:22.080 08:50:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:22.080 08:50:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:22.080 08:50:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:22.080 08:50:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:22.080 08:50:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:22.080 08:50:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:22.080 08:50:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:22.080 08:50:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:22.080 08:50:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:22.080 /dev/nbd0 00:14:22.340 08:50:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:22.340 08:50:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:22.340 08:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:22.340 08:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:14:22.340 08:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:22.340 08:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:22.340 08:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:22.340 08:50:58 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:14:22.340 08:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:22.340 08:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:22.340 08:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:22.340 1+0 records in 00:14:22.340 1+0 records out 00:14:22.340 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412722 s, 9.9 MB/s 00:14:22.340 08:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.340 08:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:14:22.340 08:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.340 08:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:22.340 08:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:14:22.340 08:50:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:22.340 08:50:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:22.340 08:50:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:22.340 08:50:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:22.340 08:50:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:22.340 08:50:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:22.340 08:50:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:22.340 08:50:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- 
# nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:22.340 08:50:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:22.340 08:50:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:22.340 08:50:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:22.340 08:50:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:22.340 08:50:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:22.340 08:50:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:22.340 08:50:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:22.340 08:50:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:22.340 08:50:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:22.340 /dev/nbd1 00:14:22.600 08:50:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:22.600 08:50:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:22.600 08:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:22.600 08:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:14:22.600 08:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:22.600 08:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:22.600 08:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:22.600 08:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:14:22.601 08:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 
-- # (( i = 1 )) 00:14:22.601 08:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:22.601 08:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:22.601 1+0 records in 00:14:22.601 1+0 records out 00:14:22.601 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433392 s, 9.5 MB/s 00:14:22.601 08:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.601 08:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:14:22.601 08:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.601 08:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:22.601 08:50:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:14:22.601 08:50:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:22.601 08:50:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:22.601 08:50:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:22.601 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:22.601 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:22.601 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:22.601 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:22.601 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:22.601 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:14:22.601 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:22.860 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:22.860 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:22.860 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:22.861 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:22.861 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:22.861 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:22.861 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:22.861 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:22.861 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:22.861 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:22.861 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:22.861 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:22.861 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:22.861 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:22.861 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:22.861 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:22.861 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:22.861 08:50:59 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:22.861 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:22.861 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:23.121 /dev/nbd1 00:14:23.121 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:23.121 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:23.121 08:50:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:23.121 08:50:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:14:23.121 08:50:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:23.121 08:50:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:23.121 08:50:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:23.121 08:50:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:14:23.121 08:50:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:23.121 08:50:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:23.121 08:50:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:23.121 1+0 records in 00:14:23.121 1+0 records out 00:14:23.121 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038856 s, 10.5 MB/s 00:14:23.121 08:50:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.121 08:50:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 
00:14:23.121 08:50:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.121 08:50:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:23.121 08:50:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:14:23.121 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:23.121 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:23.121 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:23.121 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:23.121 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:23.121 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:23.121 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:23.121 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:23.121 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:23.121 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:23.381 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:23.381 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:23.381 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:23.381 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:23.381 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:23.381 
08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:23.381 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:23.381 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:23.381 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:23.381 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:23.381 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:23.381 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:23.381 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:23.381 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:23.381 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:23.641 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:23.641 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:23.641 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:23.641 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:23.641 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:23.641 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:23.641 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:23.641 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:23.641 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = 
true ']' 00:14:23.641 08:50:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76211 00:14:23.641 08:50:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 76211 ']' 00:14:23.641 08:50:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 76211 00:14:23.641 08:50:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:14:23.641 08:50:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:23.641 08:51:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76211 00:14:23.641 killing process with pid 76211 00:14:23.641 Received shutdown signal, test time was about 10.050655 seconds 00:14:23.641 00:14:23.641 Latency(us) 00:14:23.641 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:23.641 =================================================================================================================== 00:14:23.641 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:23.641 08:51:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:23.641 08:51:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:23.641 08:51:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76211' 00:14:23.641 08:51:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 76211 00:14:23.641 [2024-10-05 08:51:00.037760] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:23.641 08:51:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 76211 00:14:24.210 [2024-10-05 08:51:00.439469] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:25.620 08:51:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:25.620 00:14:25.620 real 0m13.596s 
00:14:25.620 user 0m16.965s 00:14:25.620 sys 0m1.882s 00:14:25.620 08:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:25.620 08:51:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.620 ************************************ 00:14:25.620 END TEST raid_rebuild_test_io 00:14:25.620 ************************************ 00:14:25.620 08:51:01 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:14:25.620 08:51:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:25.620 08:51:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:25.620 08:51:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:25.620 ************************************ 00:14:25.620 START TEST raid_rebuild_test_sb_io 00:14:25.620 ************************************ 00:14:25.620 08:51:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:14:25.620 08:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:25.620 08:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:25.620 08:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:25.620 08:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:25.620 08:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:25.620 08:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:25.620 08:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:25.620 08:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:25.620 08:51:01 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:25.620 08:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:25.620 08:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:25.620 08:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:25.620 08:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:25.620 08:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:25.620 08:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:25.620 08:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:25.620 08:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:25.620 08:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:25.620 08:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:25.620 08:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:25.620 08:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:25.620 08:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:25.620 08:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:25.620 08:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:25.620 08:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:25.620 08:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:25.620 08:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:25.620 08:51:01 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:25.620 08:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:25.620 08:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:25.620 08:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76542 00:14:25.620 08:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:25.620 08:51:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76542 00:14:25.620 08:51:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 76542 ']' 00:14:25.620 08:51:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.620 08:51:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:25.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.620 08:51:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.620 08:51:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:25.620 08:51:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.620 [2024-10-05 08:51:01.900687] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:14:25.620 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:25.620 Zero copy mechanism will not be used. 
00:14:25.620 [2024-10-05 08:51:01.901350] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76542 ] 00:14:25.620 [2024-10-05 08:51:02.065323] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.880 [2024-10-05 08:51:02.261455] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.139 [2024-10-05 08:51:02.461010] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:26.139 [2024-10-05 08:51:02.461119] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:26.398 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:26.398 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:14:26.398 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:26.398 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:26.398 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.398 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.398 BaseBdev1_malloc 00:14:26.398 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.399 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:26.399 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.399 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.399 [2024-10-05 08:51:02.753822] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:26.399 [2024-10-05 08:51:02.753984] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.399 [2024-10-05 08:51:02.754029] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:26.399 [2024-10-05 08:51:02.754064] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.399 [2024-10-05 08:51:02.756076] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.399 [2024-10-05 08:51:02.756149] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:26.399 BaseBdev1 00:14:26.399 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.399 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:26.399 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:26.399 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.399 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.399 BaseBdev2_malloc 00:14:26.399 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.399 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:26.399 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.399 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.399 [2024-10-05 08:51:02.839485] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:26.399 [2024-10-05 08:51:02.839605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:14:26.399 [2024-10-05 08:51:02.839641] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:26.399 [2024-10-05 08:51:02.839671] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.399 [2024-10-05 08:51:02.841687] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.399 [2024-10-05 08:51:02.841764] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:26.399 BaseBdev2 00:14:26.399 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.399 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:26.399 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:26.399 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.399 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.659 BaseBdev3_malloc 00:14:26.659 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.659 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:26.659 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.659 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.659 [2024-10-05 08:51:02.892257] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:26.659 [2024-10-05 08:51:02.892322] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.659 [2024-10-05 08:51:02.892344] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:26.659 
[2024-10-05 08:51:02.892355] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.659 [2024-10-05 08:51:02.894267] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.659 [2024-10-05 08:51:02.894318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:26.659 BaseBdev3 00:14:26.659 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.659 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:26.659 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:26.659 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.659 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.659 BaseBdev4_malloc 00:14:26.659 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.659 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:26.659 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.659 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.659 [2024-10-05 08:51:02.944427] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:26.659 [2024-10-05 08:51:02.944484] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.659 [2024-10-05 08:51:02.944503] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:26.659 [2024-10-05 08:51:02.944513] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.659 [2024-10-05 08:51:02.946482] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.659 [2024-10-05 08:51:02.946524] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:26.659 BaseBdev4 00:14:26.659 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.659 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:26.659 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.659 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.659 spare_malloc 00:14:26.659 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.659 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:26.659 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.659 08:51:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.659 spare_delay 00:14:26.659 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.659 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:26.659 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.659 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.659 [2024-10-05 08:51:03.010312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:26.659 [2024-10-05 08:51:03.010380] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.659 [2024-10-05 08:51:03.010400] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:14:26.659 [2024-10-05 08:51:03.010410] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.659 [2024-10-05 08:51:03.012315] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.659 [2024-10-05 08:51:03.012356] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:26.659 spare 00:14:26.659 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.659 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:26.659 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.659 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.659 [2024-10-05 08:51:03.022342] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:26.659 [2024-10-05 08:51:03.024029] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:26.659 [2024-10-05 08:51:03.024093] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:26.659 [2024-10-05 08:51:03.024143] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:26.660 [2024-10-05 08:51:03.024307] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:26.660 [2024-10-05 08:51:03.024319] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:26.660 [2024-10-05 08:51:03.024546] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:26.660 [2024-10-05 08:51:03.024702] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:26.660 [2024-10-05 08:51:03.024712] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:26.660 [2024-10-05 08:51:03.024854] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.660 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.660 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:26.660 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.660 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.660 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:26.660 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:26.660 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:26.660 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.660 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.660 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.660 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.660 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.660 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.660 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.660 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.660 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.660 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.660 "name": "raid_bdev1", 00:14:26.660 "uuid": "71db00c8-1d15-44a7-96b6-d4da7df75c0a", 00:14:26.660 "strip_size_kb": 0, 00:14:26.660 "state": "online", 00:14:26.660 "raid_level": "raid1", 00:14:26.660 "superblock": true, 00:14:26.660 "num_base_bdevs": 4, 00:14:26.660 "num_base_bdevs_discovered": 4, 00:14:26.660 "num_base_bdevs_operational": 4, 00:14:26.660 "base_bdevs_list": [ 00:14:26.660 { 00:14:26.660 "name": "BaseBdev1", 00:14:26.660 "uuid": "b8368730-0215-5ba6-8ec4-32f89e341226", 00:14:26.660 "is_configured": true, 00:14:26.660 "data_offset": 2048, 00:14:26.660 "data_size": 63488 00:14:26.660 }, 00:14:26.660 { 00:14:26.660 "name": "BaseBdev2", 00:14:26.660 "uuid": "b7064acc-620e-5fde-9998-46ca7fc4f5ed", 00:14:26.660 "is_configured": true, 00:14:26.660 "data_offset": 2048, 00:14:26.660 "data_size": 63488 00:14:26.660 }, 00:14:26.660 { 00:14:26.660 "name": "BaseBdev3", 00:14:26.660 "uuid": "d81ffc92-1e80-5650-816a-0da6b49a29f5", 00:14:26.660 "is_configured": true, 00:14:26.660 "data_offset": 2048, 00:14:26.660 "data_size": 63488 00:14:26.660 }, 00:14:26.660 { 00:14:26.660 "name": "BaseBdev4", 00:14:26.660 "uuid": "73beaa81-3c62-54f1-9d70-a19dcdc25b7f", 00:14:26.660 "is_configured": true, 00:14:26.660 "data_offset": 2048, 00:14:26.660 "data_size": 63488 00:14:26.660 } 00:14:26.660 ] 00:14:26.660 }' 00:14:26.660 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.660 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.229 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:27.229 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:27.229 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.229 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.229 [2024-10-05 08:51:03.429896] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:27.229 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.229 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:27.229 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.229 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:27.229 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.229 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.229 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.229 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:27.229 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:27.229 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:27.229 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:27.229 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.229 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.229 [2024-10-05 08:51:03.521435] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:27.229 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.229 08:51:03 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:27.229 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.229 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:27.229 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:27.229 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:27.229 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:27.229 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.229 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.229 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.229 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.229 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.229 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.229 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.229 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.229 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.229 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.229 "name": "raid_bdev1", 00:14:27.229 "uuid": "71db00c8-1d15-44a7-96b6-d4da7df75c0a", 00:14:27.229 "strip_size_kb": 0, 00:14:27.229 "state": "online", 00:14:27.229 "raid_level": "raid1", 00:14:27.229 
"superblock": true, 00:14:27.229 "num_base_bdevs": 4, 00:14:27.229 "num_base_bdevs_discovered": 3, 00:14:27.229 "num_base_bdevs_operational": 3, 00:14:27.229 "base_bdevs_list": [ 00:14:27.229 { 00:14:27.229 "name": null, 00:14:27.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.229 "is_configured": false, 00:14:27.229 "data_offset": 0, 00:14:27.229 "data_size": 63488 00:14:27.229 }, 00:14:27.229 { 00:14:27.229 "name": "BaseBdev2", 00:14:27.229 "uuid": "b7064acc-620e-5fde-9998-46ca7fc4f5ed", 00:14:27.229 "is_configured": true, 00:14:27.229 "data_offset": 2048, 00:14:27.229 "data_size": 63488 00:14:27.229 }, 00:14:27.229 { 00:14:27.229 "name": "BaseBdev3", 00:14:27.229 "uuid": "d81ffc92-1e80-5650-816a-0da6b49a29f5", 00:14:27.229 "is_configured": true, 00:14:27.229 "data_offset": 2048, 00:14:27.229 "data_size": 63488 00:14:27.229 }, 00:14:27.229 { 00:14:27.229 "name": "BaseBdev4", 00:14:27.229 "uuid": "73beaa81-3c62-54f1-9d70-a19dcdc25b7f", 00:14:27.229 "is_configured": true, 00:14:27.229 "data_offset": 2048, 00:14:27.229 "data_size": 63488 00:14:27.229 } 00:14:27.229 ] 00:14:27.229 }' 00:14:27.229 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.229 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.229 [2024-10-05 08:51:03.616144] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:27.229 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:27.229 Zero copy mechanism will not be used. 00:14:27.229 Running I/O for 60 seconds... 
00:14:27.489 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:27.489 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.489 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.489 [2024-10-05 08:51:03.952049] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:27.748 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.748 08:51:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:27.748 [2024-10-05 08:51:04.012796] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:27.748 [2024-10-05 08:51:04.014709] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:27.748 [2024-10-05 08:51:04.131277] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:27.748 [2024-10-05 08:51:04.131834] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:28.008 [2024-10-05 08:51:04.355934] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:28.008 [2024-10-05 08:51:04.356343] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:28.268 143.00 IOPS, 429.00 MiB/s [2024-10-05 08:51:04.713507] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:28.528 [2024-10-05 08:51:04.835881] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:28.528 08:51:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:28.528 08:51:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.528 08:51:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:28.528 08:51:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:28.528 08:51:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.787 08:51:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.787 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.787 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.787 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.787 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.787 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.787 "name": "raid_bdev1", 00:14:28.787 "uuid": "71db00c8-1d15-44a7-96b6-d4da7df75c0a", 00:14:28.787 "strip_size_kb": 0, 00:14:28.787 "state": "online", 00:14:28.787 "raid_level": "raid1", 00:14:28.788 "superblock": true, 00:14:28.788 "num_base_bdevs": 4, 00:14:28.788 "num_base_bdevs_discovered": 4, 00:14:28.788 "num_base_bdevs_operational": 4, 00:14:28.788 "process": { 00:14:28.788 "type": "rebuild", 00:14:28.788 "target": "spare", 00:14:28.788 "progress": { 00:14:28.788 "blocks": 12288, 00:14:28.788 "percent": 19 00:14:28.788 } 00:14:28.788 }, 00:14:28.788 "base_bdevs_list": [ 00:14:28.788 { 00:14:28.788 "name": "spare", 00:14:28.788 "uuid": "9041abc7-f370-5cce-a8f2-db60a8afd392", 00:14:28.788 "is_configured": true, 00:14:28.788 "data_offset": 2048, 00:14:28.788 "data_size": 63488 00:14:28.788 }, 00:14:28.788 { 
00:14:28.788 "name": "BaseBdev2", 00:14:28.788 "uuid": "b7064acc-620e-5fde-9998-46ca7fc4f5ed", 00:14:28.788 "is_configured": true, 00:14:28.788 "data_offset": 2048, 00:14:28.788 "data_size": 63488 00:14:28.788 }, 00:14:28.788 { 00:14:28.788 "name": "BaseBdev3", 00:14:28.788 "uuid": "d81ffc92-1e80-5650-816a-0da6b49a29f5", 00:14:28.788 "is_configured": true, 00:14:28.788 "data_offset": 2048, 00:14:28.788 "data_size": 63488 00:14:28.788 }, 00:14:28.788 { 00:14:28.788 "name": "BaseBdev4", 00:14:28.788 "uuid": "73beaa81-3c62-54f1-9d70-a19dcdc25b7f", 00:14:28.788 "is_configured": true, 00:14:28.788 "data_offset": 2048, 00:14:28.788 "data_size": 63488 00:14:28.788 } 00:14:28.788 ] 00:14:28.788 }' 00:14:28.788 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.788 [2024-10-05 08:51:05.061808] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:28.788 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:28.788 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.788 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:28.788 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:28.788 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.788 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.788 [2024-10-05 08:51:05.149656] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:29.047 [2024-10-05 08:51:05.308417] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:29.048 [2024-10-05 08:51:05.318077] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.048 [2024-10-05 08:51:05.318132] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:29.048 [2024-10-05 08:51:05.318144] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:29.048 [2024-10-05 08:51:05.346028] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:29.048 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.048 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:29.048 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.048 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.048 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:29.048 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:29.048 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:29.048 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.048 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.048 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.048 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.048 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.048 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.048 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.048 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.048 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.048 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.048 "name": "raid_bdev1", 00:14:29.048 "uuid": "71db00c8-1d15-44a7-96b6-d4da7df75c0a", 00:14:29.048 "strip_size_kb": 0, 00:14:29.048 "state": "online", 00:14:29.048 "raid_level": "raid1", 00:14:29.048 "superblock": true, 00:14:29.048 "num_base_bdevs": 4, 00:14:29.048 "num_base_bdevs_discovered": 3, 00:14:29.048 "num_base_bdevs_operational": 3, 00:14:29.048 "base_bdevs_list": [ 00:14:29.048 { 00:14:29.048 "name": null, 00:14:29.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.048 "is_configured": false, 00:14:29.048 "data_offset": 0, 00:14:29.048 "data_size": 63488 00:14:29.048 }, 00:14:29.048 { 00:14:29.048 "name": "BaseBdev2", 00:14:29.048 "uuid": "b7064acc-620e-5fde-9998-46ca7fc4f5ed", 00:14:29.048 "is_configured": true, 00:14:29.048 "data_offset": 2048, 00:14:29.048 "data_size": 63488 00:14:29.048 }, 00:14:29.048 { 00:14:29.048 "name": "BaseBdev3", 00:14:29.048 "uuid": "d81ffc92-1e80-5650-816a-0da6b49a29f5", 00:14:29.048 "is_configured": true, 00:14:29.048 "data_offset": 2048, 00:14:29.048 "data_size": 63488 00:14:29.048 }, 00:14:29.048 { 00:14:29.048 "name": "BaseBdev4", 00:14:29.048 "uuid": "73beaa81-3c62-54f1-9d70-a19dcdc25b7f", 00:14:29.048 "is_configured": true, 00:14:29.048 "data_offset": 2048, 00:14:29.048 "data_size": 63488 00:14:29.048 } 00:14:29.048 ] 00:14:29.048 }' 00:14:29.048 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.048 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.567 144.50 IOPS, 433.50 MiB/s 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:14:29.567 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.567 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:29.567 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:29.567 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.567 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.567 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.567 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.567 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.567 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.567 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.567 "name": "raid_bdev1", 00:14:29.567 "uuid": "71db00c8-1d15-44a7-96b6-d4da7df75c0a", 00:14:29.567 "strip_size_kb": 0, 00:14:29.567 "state": "online", 00:14:29.567 "raid_level": "raid1", 00:14:29.567 "superblock": true, 00:14:29.567 "num_base_bdevs": 4, 00:14:29.567 "num_base_bdevs_discovered": 3, 00:14:29.567 "num_base_bdevs_operational": 3, 00:14:29.567 "base_bdevs_list": [ 00:14:29.567 { 00:14:29.567 "name": null, 00:14:29.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.567 "is_configured": false, 00:14:29.567 "data_offset": 0, 00:14:29.567 "data_size": 63488 00:14:29.567 }, 00:14:29.567 { 00:14:29.567 "name": "BaseBdev2", 00:14:29.567 "uuid": "b7064acc-620e-5fde-9998-46ca7fc4f5ed", 00:14:29.567 "is_configured": true, 00:14:29.567 "data_offset": 2048, 00:14:29.567 "data_size": 63488 00:14:29.567 }, 
00:14:29.567 { 00:14:29.567 "name": "BaseBdev3", 00:14:29.567 "uuid": "d81ffc92-1e80-5650-816a-0da6b49a29f5", 00:14:29.567 "is_configured": true, 00:14:29.567 "data_offset": 2048, 00:14:29.567 "data_size": 63488 00:14:29.567 }, 00:14:29.567 { 00:14:29.567 "name": "BaseBdev4", 00:14:29.567 "uuid": "73beaa81-3c62-54f1-9d70-a19dcdc25b7f", 00:14:29.567 "is_configured": true, 00:14:29.567 "data_offset": 2048, 00:14:29.567 "data_size": 63488 00:14:29.567 } 00:14:29.567 ] 00:14:29.567 }' 00:14:29.567 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.567 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:29.567 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.567 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:29.567 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:29.567 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.567 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.567 [2024-10-05 08:51:05.942167] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:29.568 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.568 08:51:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:29.568 [2024-10-05 08:51:05.993918] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:29.568 [2024-10-05 08:51:05.995773] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:29.827 [2024-10-05 08:51:06.115866] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 
offset_begin: 0 offset_end: 6144 00:14:29.827 [2024-10-05 08:51:06.116270] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:30.087 [2024-10-05 08:51:06.323972] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:30.087 [2024-10-05 08:51:06.324626] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:30.346 137.33 IOPS, 412.00 MiB/s [2024-10-05 08:51:06.671191] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:30.606 [2024-10-05 08:51:06.893965] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:30.606 08:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:30.606 08:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.606 08:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:30.606 08:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:30.606 08:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.606 08:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.606 08:51:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.606 08:51:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.606 08:51:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.606 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:14:30.606 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.606 "name": "raid_bdev1", 00:14:30.606 "uuid": "71db00c8-1d15-44a7-96b6-d4da7df75c0a", 00:14:30.606 "strip_size_kb": 0, 00:14:30.606 "state": "online", 00:14:30.606 "raid_level": "raid1", 00:14:30.606 "superblock": true, 00:14:30.606 "num_base_bdevs": 4, 00:14:30.606 "num_base_bdevs_discovered": 4, 00:14:30.606 "num_base_bdevs_operational": 4, 00:14:30.606 "process": { 00:14:30.606 "type": "rebuild", 00:14:30.606 "target": "spare", 00:14:30.606 "progress": { 00:14:30.606 "blocks": 10240, 00:14:30.606 "percent": 16 00:14:30.606 } 00:14:30.606 }, 00:14:30.606 "base_bdevs_list": [ 00:14:30.606 { 00:14:30.606 "name": "spare", 00:14:30.606 "uuid": "9041abc7-f370-5cce-a8f2-db60a8afd392", 00:14:30.606 "is_configured": true, 00:14:30.606 "data_offset": 2048, 00:14:30.606 "data_size": 63488 00:14:30.606 }, 00:14:30.606 { 00:14:30.606 "name": "BaseBdev2", 00:14:30.606 "uuid": "b7064acc-620e-5fde-9998-46ca7fc4f5ed", 00:14:30.606 "is_configured": true, 00:14:30.606 "data_offset": 2048, 00:14:30.606 "data_size": 63488 00:14:30.606 }, 00:14:30.606 { 00:14:30.606 "name": "BaseBdev3", 00:14:30.606 "uuid": "d81ffc92-1e80-5650-816a-0da6b49a29f5", 00:14:30.606 "is_configured": true, 00:14:30.606 "data_offset": 2048, 00:14:30.606 "data_size": 63488 00:14:30.606 }, 00:14:30.606 { 00:14:30.606 "name": "BaseBdev4", 00:14:30.606 "uuid": "73beaa81-3c62-54f1-9d70-a19dcdc25b7f", 00:14:30.606 "is_configured": true, 00:14:30.606 "data_offset": 2048, 00:14:30.606 "data_size": 63488 00:14:30.606 } 00:14:30.606 ] 00:14:30.606 }' 00:14:30.606 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.865 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:30.866 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:14:30.866 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:30.866 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:30.866 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:30.866 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:30.866 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:30.866 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:30.866 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:30.866 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:30.866 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.866 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.866 [2024-10-05 08:51:07.136649] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:30.866 [2024-10-05 08:51:07.237958] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:30.866 [2024-10-05 08:51:07.237999] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:30.866 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.866 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:30.866 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:30.866 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:30.866 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.866 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:30.866 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:30.866 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.866 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.866 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.866 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.866 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.866 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.866 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.866 "name": "raid_bdev1", 00:14:30.866 "uuid": "71db00c8-1d15-44a7-96b6-d4da7df75c0a", 00:14:30.866 "strip_size_kb": 0, 00:14:30.866 "state": "online", 00:14:30.866 "raid_level": "raid1", 00:14:30.866 "superblock": true, 00:14:30.866 "num_base_bdevs": 4, 00:14:30.866 "num_base_bdevs_discovered": 3, 00:14:30.866 "num_base_bdevs_operational": 3, 00:14:30.866 "process": { 00:14:30.866 "type": "rebuild", 00:14:30.866 "target": "spare", 00:14:30.866 "progress": { 00:14:30.866 "blocks": 14336, 00:14:30.866 "percent": 22 00:14:30.866 } 00:14:30.866 }, 00:14:30.866 "base_bdevs_list": [ 00:14:30.866 { 00:14:30.866 "name": "spare", 00:14:30.866 "uuid": "9041abc7-f370-5cce-a8f2-db60a8afd392", 00:14:30.866 "is_configured": true, 00:14:30.866 "data_offset": 2048, 00:14:30.866 "data_size": 63488 00:14:30.866 }, 00:14:30.866 { 00:14:30.866 "name": null, 00:14:30.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.866 
"is_configured": false, 00:14:30.866 "data_offset": 0, 00:14:30.866 "data_size": 63488 00:14:30.866 }, 00:14:30.866 { 00:14:30.866 "name": "BaseBdev3", 00:14:30.866 "uuid": "d81ffc92-1e80-5650-816a-0da6b49a29f5", 00:14:30.866 "is_configured": true, 00:14:30.866 "data_offset": 2048, 00:14:30.866 "data_size": 63488 00:14:30.866 }, 00:14:30.866 { 00:14:30.866 "name": "BaseBdev4", 00:14:30.866 "uuid": "73beaa81-3c62-54f1-9d70-a19dcdc25b7f", 00:14:30.866 "is_configured": true, 00:14:30.866 "data_offset": 2048, 00:14:30.866 "data_size": 63488 00:14:30.866 } 00:14:30.866 ] 00:14:30.866 }' 00:14:30.866 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.125 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:31.125 [2024-10-05 08:51:07.351870] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:31.125 [2024-10-05 08:51:07.352260] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:31.125 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.125 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:31.125 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=499 00:14:31.125 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:31.125 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:31.125 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.125 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:31.125 08:51:07 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:31.125 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.125 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.125 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.125 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.125 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.125 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.125 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.125 "name": "raid_bdev1", 00:14:31.125 "uuid": "71db00c8-1d15-44a7-96b6-d4da7df75c0a", 00:14:31.125 "strip_size_kb": 0, 00:14:31.125 "state": "online", 00:14:31.125 "raid_level": "raid1", 00:14:31.125 "superblock": true, 00:14:31.125 "num_base_bdevs": 4, 00:14:31.125 "num_base_bdevs_discovered": 3, 00:14:31.125 "num_base_bdevs_operational": 3, 00:14:31.125 "process": { 00:14:31.125 "type": "rebuild", 00:14:31.125 "target": "spare", 00:14:31.125 "progress": { 00:14:31.125 "blocks": 16384, 00:14:31.125 "percent": 25 00:14:31.125 } 00:14:31.125 }, 00:14:31.125 "base_bdevs_list": [ 00:14:31.125 { 00:14:31.125 "name": "spare", 00:14:31.125 "uuid": "9041abc7-f370-5cce-a8f2-db60a8afd392", 00:14:31.125 "is_configured": true, 00:14:31.125 "data_offset": 2048, 00:14:31.125 "data_size": 63488 00:14:31.125 }, 00:14:31.125 { 00:14:31.125 "name": null, 00:14:31.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.125 "is_configured": false, 00:14:31.125 "data_offset": 0, 00:14:31.125 "data_size": 63488 00:14:31.125 }, 00:14:31.125 { 00:14:31.125 "name": "BaseBdev3", 00:14:31.125 "uuid": 
"d81ffc92-1e80-5650-816a-0da6b49a29f5", 00:14:31.125 "is_configured": true, 00:14:31.125 "data_offset": 2048, 00:14:31.125 "data_size": 63488 00:14:31.125 }, 00:14:31.125 { 00:14:31.125 "name": "BaseBdev4", 00:14:31.125 "uuid": "73beaa81-3c62-54f1-9d70-a19dcdc25b7f", 00:14:31.125 "is_configured": true, 00:14:31.125 "data_offset": 2048, 00:14:31.125 "data_size": 63488 00:14:31.125 } 00:14:31.125 ] 00:14:31.125 }' 00:14:31.125 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.125 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:31.125 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.125 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:31.125 08:51:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:31.384 118.00 IOPS, 354.00 MiB/s [2024-10-05 08:51:07.675246] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:31.384 [2024-10-05 08:51:07.782852] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:31.385 [2024-10-05 08:51:07.783067] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:31.995 [2024-10-05 08:51:08.360543] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:32.254 08:51:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:32.254 08:51:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:32.254 08:51:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:32.254 08:51:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:32.254 08:51:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:32.254 08:51:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:32.254 08:51:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.254 08:51:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.254 08:51:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.254 08:51:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.254 08:51:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.254 08:51:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.254 "name": "raid_bdev1", 00:14:32.254 "uuid": "71db00c8-1d15-44a7-96b6-d4da7df75c0a", 00:14:32.254 "strip_size_kb": 0, 00:14:32.254 "state": "online", 00:14:32.254 "raid_level": "raid1", 00:14:32.254 "superblock": true, 00:14:32.254 "num_base_bdevs": 4, 00:14:32.254 "num_base_bdevs_discovered": 3, 00:14:32.254 "num_base_bdevs_operational": 3, 00:14:32.254 "process": { 00:14:32.254 "type": "rebuild", 00:14:32.254 "target": "spare", 00:14:32.254 "progress": { 00:14:32.254 "blocks": 32768, 00:14:32.254 "percent": 51 00:14:32.254 } 00:14:32.254 }, 00:14:32.254 "base_bdevs_list": [ 00:14:32.254 { 00:14:32.254 "name": "spare", 00:14:32.254 "uuid": "9041abc7-f370-5cce-a8f2-db60a8afd392", 00:14:32.254 "is_configured": true, 00:14:32.254 "data_offset": 2048, 00:14:32.254 "data_size": 63488 00:14:32.254 }, 00:14:32.254 { 00:14:32.254 "name": null, 00:14:32.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.254 "is_configured": false, 00:14:32.254 
"data_offset": 0, 00:14:32.254 "data_size": 63488 00:14:32.254 }, 00:14:32.254 { 00:14:32.254 "name": "BaseBdev3", 00:14:32.254 "uuid": "d81ffc92-1e80-5650-816a-0da6b49a29f5", 00:14:32.254 "is_configured": true, 00:14:32.254 "data_offset": 2048, 00:14:32.254 "data_size": 63488 00:14:32.254 }, 00:14:32.254 { 00:14:32.254 "name": "BaseBdev4", 00:14:32.254 "uuid": "73beaa81-3c62-54f1-9d70-a19dcdc25b7f", 00:14:32.254 "is_configured": true, 00:14:32.254 "data_offset": 2048, 00:14:32.254 "data_size": 63488 00:14:32.254 } 00:14:32.254 ] 00:14:32.254 }' 00:14:32.254 08:51:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:32.254 08:51:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:32.254 08:51:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:32.254 103.60 IOPS, 310.80 MiB/s 08:51:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:32.254 08:51:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:32.511 [2024-10-05 08:51:08.815923] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:32.511 [2024-10-05 08:51:08.816816] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:33.078 [2024-10-05 08:51:09.361545] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:33.336 [2024-10-05 08:51:09.581161] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:14:33.336 93.17 IOPS, 279.50 MiB/s 08:51:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:33.336 08:51:09 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:33.336 08:51:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.336 08:51:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:33.336 08:51:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:33.336 08:51:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.336 08:51:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.336 08:51:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.336 08:51:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.336 08:51:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.336 08:51:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.336 08:51:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.336 "name": "raid_bdev1", 00:14:33.336 "uuid": "71db00c8-1d15-44a7-96b6-d4da7df75c0a", 00:14:33.336 "strip_size_kb": 0, 00:14:33.336 "state": "online", 00:14:33.336 "raid_level": "raid1", 00:14:33.336 "superblock": true, 00:14:33.336 "num_base_bdevs": 4, 00:14:33.336 "num_base_bdevs_discovered": 3, 00:14:33.336 "num_base_bdevs_operational": 3, 00:14:33.336 "process": { 00:14:33.336 "type": "rebuild", 00:14:33.336 "target": "spare", 00:14:33.336 "progress": { 00:14:33.336 "blocks": 51200, 00:14:33.336 "percent": 80 00:14:33.336 } 00:14:33.336 }, 00:14:33.336 "base_bdevs_list": [ 00:14:33.336 { 00:14:33.336 "name": "spare", 00:14:33.336 "uuid": "9041abc7-f370-5cce-a8f2-db60a8afd392", 00:14:33.336 "is_configured": true, 00:14:33.336 "data_offset": 2048, 00:14:33.336 "data_size": 63488 
00:14:33.336 }, 00:14:33.336 { 00:14:33.336 "name": null, 00:14:33.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.336 "is_configured": false, 00:14:33.336 "data_offset": 0, 00:14:33.336 "data_size": 63488 00:14:33.336 }, 00:14:33.336 { 00:14:33.336 "name": "BaseBdev3", 00:14:33.336 "uuid": "d81ffc92-1e80-5650-816a-0da6b49a29f5", 00:14:33.336 "is_configured": true, 00:14:33.336 "data_offset": 2048, 00:14:33.336 "data_size": 63488 00:14:33.336 }, 00:14:33.336 { 00:14:33.336 "name": "BaseBdev4", 00:14:33.336 "uuid": "73beaa81-3c62-54f1-9d70-a19dcdc25b7f", 00:14:33.336 "is_configured": true, 00:14:33.336 "data_offset": 2048, 00:14:33.336 "data_size": 63488 00:14:33.336 } 00:14:33.336 ] 00:14:33.336 }' 00:14:33.336 08:51:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.336 08:51:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:33.336 08:51:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.336 08:51:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:33.336 08:51:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:33.903 [2024-10-05 08:51:10.286543] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:33.903 [2024-10-05 08:51:10.344360] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:33.903 [2024-10-05 08:51:10.348169] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.423 84.71 IOPS, 254.14 MiB/s 08:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:34.424 08:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:34.424 08:51:10 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:34.424 08:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:34.424 08:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:34.424 08:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.424 08:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.424 08:51:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.424 08:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.424 08:51:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.424 08:51:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.424 08:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.424 "name": "raid_bdev1", 00:14:34.424 "uuid": "71db00c8-1d15-44a7-96b6-d4da7df75c0a", 00:14:34.424 "strip_size_kb": 0, 00:14:34.424 "state": "online", 00:14:34.424 "raid_level": "raid1", 00:14:34.424 "superblock": true, 00:14:34.424 "num_base_bdevs": 4, 00:14:34.424 "num_base_bdevs_discovered": 3, 00:14:34.424 "num_base_bdevs_operational": 3, 00:14:34.424 "base_bdevs_list": [ 00:14:34.424 { 00:14:34.424 "name": "spare", 00:14:34.424 "uuid": "9041abc7-f370-5cce-a8f2-db60a8afd392", 00:14:34.424 "is_configured": true, 00:14:34.424 "data_offset": 2048, 00:14:34.424 "data_size": 63488 00:14:34.424 }, 00:14:34.424 { 00:14:34.424 "name": null, 00:14:34.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.424 "is_configured": false, 00:14:34.424 "data_offset": 0, 00:14:34.424 "data_size": 63488 00:14:34.424 }, 00:14:34.424 { 00:14:34.424 "name": "BaseBdev3", 00:14:34.424 "uuid": "d81ffc92-1e80-5650-816a-0da6b49a29f5", 
00:14:34.424 "is_configured": true, 00:14:34.424 "data_offset": 2048, 00:14:34.424 "data_size": 63488 00:14:34.424 }, 00:14:34.424 { 00:14:34.424 "name": "BaseBdev4", 00:14:34.424 "uuid": "73beaa81-3c62-54f1-9d70-a19dcdc25b7f", 00:14:34.424 "is_configured": true, 00:14:34.424 "data_offset": 2048, 00:14:34.424 "data_size": 63488 00:14:34.424 } 00:14:34.424 ] 00:14:34.424 }' 00:14:34.424 08:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:34.684 08:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:34.684 08:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.684 08:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:34.684 08:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:34.684 08:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:34.684 08:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:34.684 08:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:34.684 08:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:34.684 08:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.684 08:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.684 08:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.684 08:51:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.684 08:51:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.684 08:51:10 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.684 08:51:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.684 "name": "raid_bdev1", 00:14:34.684 "uuid": "71db00c8-1d15-44a7-96b6-d4da7df75c0a", 00:14:34.684 "strip_size_kb": 0, 00:14:34.684 "state": "online", 00:14:34.684 "raid_level": "raid1", 00:14:34.684 "superblock": true, 00:14:34.684 "num_base_bdevs": 4, 00:14:34.684 "num_base_bdevs_discovered": 3, 00:14:34.684 "num_base_bdevs_operational": 3, 00:14:34.684 "base_bdevs_list": [ 00:14:34.684 { 00:14:34.684 "name": "spare", 00:14:34.684 "uuid": "9041abc7-f370-5cce-a8f2-db60a8afd392", 00:14:34.684 "is_configured": true, 00:14:34.684 "data_offset": 2048, 00:14:34.684 "data_size": 63488 00:14:34.684 }, 00:14:34.684 { 00:14:34.684 "name": null, 00:14:34.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.684 "is_configured": false, 00:14:34.684 "data_offset": 0, 00:14:34.684 "data_size": 63488 00:14:34.684 }, 00:14:34.684 { 00:14:34.684 "name": "BaseBdev3", 00:14:34.684 "uuid": "d81ffc92-1e80-5650-816a-0da6b49a29f5", 00:14:34.684 "is_configured": true, 00:14:34.684 "data_offset": 2048, 00:14:34.684 "data_size": 63488 00:14:34.684 }, 00:14:34.684 { 00:14:34.684 "name": "BaseBdev4", 00:14:34.684 "uuid": "73beaa81-3c62-54f1-9d70-a19dcdc25b7f", 00:14:34.684 "is_configured": true, 00:14:34.684 "data_offset": 2048, 00:14:34.684 "data_size": 63488 00:14:34.684 } 00:14:34.684 ] 00:14:34.684 }' 00:14:34.684 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:34.684 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:34.684 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.684 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:34.684 08:51:11 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:34.684 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:34.684 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.684 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:34.684 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:34.684 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:34.684 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.684 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.684 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.684 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.684 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.684 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.684 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.685 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.685 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.685 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.685 "name": "raid_bdev1", 00:14:34.685 "uuid": "71db00c8-1d15-44a7-96b6-d4da7df75c0a", 00:14:34.685 "strip_size_kb": 0, 00:14:34.685 "state": "online", 00:14:34.685 "raid_level": "raid1", 00:14:34.685 
"superblock": true, 00:14:34.685 "num_base_bdevs": 4, 00:14:34.685 "num_base_bdevs_discovered": 3, 00:14:34.685 "num_base_bdevs_operational": 3, 00:14:34.685 "base_bdevs_list": [ 00:14:34.685 { 00:14:34.685 "name": "spare", 00:14:34.685 "uuid": "9041abc7-f370-5cce-a8f2-db60a8afd392", 00:14:34.685 "is_configured": true, 00:14:34.685 "data_offset": 2048, 00:14:34.685 "data_size": 63488 00:14:34.685 }, 00:14:34.685 { 00:14:34.685 "name": null, 00:14:34.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.685 "is_configured": false, 00:14:34.685 "data_offset": 0, 00:14:34.685 "data_size": 63488 00:14:34.685 }, 00:14:34.685 { 00:14:34.685 "name": "BaseBdev3", 00:14:34.685 "uuid": "d81ffc92-1e80-5650-816a-0da6b49a29f5", 00:14:34.685 "is_configured": true, 00:14:34.685 "data_offset": 2048, 00:14:34.685 "data_size": 63488 00:14:34.685 }, 00:14:34.685 { 00:14:34.685 "name": "BaseBdev4", 00:14:34.685 "uuid": "73beaa81-3c62-54f1-9d70-a19dcdc25b7f", 00:14:34.685 "is_configured": true, 00:14:34.685 "data_offset": 2048, 00:14:34.685 "data_size": 63488 00:14:34.685 } 00:14:34.685 ] 00:14:34.685 }' 00:14:34.685 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.685 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.255 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:35.255 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.255 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.255 [2024-10-05 08:51:11.588316] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:35.255 [2024-10-05 08:51:11.588353] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:35.255 77.62 IOPS, 232.88 MiB/s 00:14:35.255 Latency(us) 00:14:35.255 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.255 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:35.255 raid_bdev1 : 8.04 77.51 232.53 0.00 0.00 17896.59 339.84 117220.72 00:14:35.255 =================================================================================================================== 00:14:35.255 Total : 77.51 232.53 0.00 0.00 17896.59 339.84 117220.72 00:14:35.255 [2024-10-05 08:51:11.659952] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:35.255 [2024-10-05 08:51:11.660011] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:35.255 [2024-10-05 08:51:11.660119] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:35.255 [2024-10-05 08:51:11.660128] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:35.255 { 00:14:35.255 "results": [ 00:14:35.255 { 00:14:35.255 "job": "raid_bdev1", 00:14:35.255 "core_mask": "0x1", 00:14:35.255 "workload": "randrw", 00:14:35.255 "percentage": 50, 00:14:35.255 "status": "finished", 00:14:35.255 "queue_depth": 2, 00:14:35.255 "io_size": 3145728, 00:14:35.255 "runtime": 8.037605, 00:14:35.255 "iops": 77.51065149382185, 00:14:35.255 "mibps": 232.53195448146556, 00:14:35.255 "io_failed": 0, 00:14:35.255 "io_timeout": 0, 00:14:35.255 "avg_latency_us": 17896.59328085682, 00:14:35.255 "min_latency_us": 339.8427947598253, 00:14:35.255 "max_latency_us": 117220.7231441048 00:14:35.255 } 00:14:35.255 ], 00:14:35.255 "core_count": 1 00:14:35.255 } 00:14:35.255 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.255 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.255 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:35.255 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:35.255 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.255 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.255 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:35.255 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:35.255 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:35.255 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:35.255 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:35.255 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:35.255 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:35.255 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:35.255 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:35.255 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:35.255 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:35.255 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:35.255 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:35.515 /dev/nbd0 00:14:35.515 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:35.515 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:14:35.515 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:35.515 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:14:35.515 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:35.515 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:35.515 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:35.515 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:14:35.515 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:35.515 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:35.515 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:35.515 1+0 records in 00:14:35.515 1+0 records out 00:14:35.515 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393671 s, 10.4 MB/s 00:14:35.515 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:35.515 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:14:35.515 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:35.775 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:35.775 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:14:35.775 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:35.775 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i 
< 1 )) 00:14:35.775 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:35.775 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:35.775 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:35.775 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:35.775 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:35.775 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:35.775 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:35.775 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:35.775 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:35.775 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:35.775 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:35.775 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:35.775 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:35.775 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:35.775 08:51:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:35.775 /dev/nbd1 00:14:35.775 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:35.775 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:35.775 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:35.775 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:14:35.775 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:35.775 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:35.775 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:35.775 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:14:35.775 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:35.775 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:35.775 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:35.775 1+0 records in 00:14:35.775 1+0 records out 00:14:35.775 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336603 s, 12.2 MB/s 00:14:35.775 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:35.776 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:14:35.776 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:35.776 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:35.776 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:14:35.776 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:35.776 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:35.776 08:51:12 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:36.035 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:36.035 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:36.035 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:36.035 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:36.035 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:36.035 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:36.035 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:36.296 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:36.296 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:36.296 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:36.296 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:36.296 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:36.296 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:36.296 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:36.296 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:36.296 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:36.296 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 
00:14:36.296 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:36.296 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:36.296 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:36.296 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:36.296 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:36.296 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:36.296 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:36.296 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:36.296 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:36.296 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:36.556 /dev/nbd1 00:14:36.556 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:36.556 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:36.556 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:36.556 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:14:36.556 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:36.556 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:36.556 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:36.556 08:51:12 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:14:36.556 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:36.556 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:36.556 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:36.556 1+0 records in 00:14:36.556 1+0 records out 00:14:36.556 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386149 s, 10.6 MB/s 00:14:36.556 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.556 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:14:36.556 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.556 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:36.556 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:14:36.556 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:36.556 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:36.556 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:36.556 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:36.556 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:36.556 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:36.556 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # 
local nbd_list 00:14:36.556 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:36.556 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:36.556 08:51:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:36.816 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:36.816 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:36.816 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:36.816 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:36.816 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:36.816 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:36.816 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:36.816 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:36.816 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:36.816 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:36.816 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:36.816 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:36.816 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:36.816 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:36.816 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:37.076 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:37.076 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:37.076 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:37.076 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:37.076 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:37.076 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:37.076 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:37.076 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:37.076 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:37.076 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:37.076 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.076 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.076 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.076 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:37.076 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.076 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.076 [2024-10-05 08:51:13.389310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:37.076 [2024-10-05 08:51:13.389369] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.076 [2024-10-05 08:51:13.389394] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:37.076 [2024-10-05 08:51:13.389404] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.076 [2024-10-05 08:51:13.391410] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.076 [2024-10-05 08:51:13.391446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:37.076 [2024-10-05 08:51:13.391526] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:37.076 [2024-10-05 08:51:13.391572] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:37.076 [2024-10-05 08:51:13.391721] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:37.076 [2024-10-05 08:51:13.391833] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:37.076 spare 00:14:37.076 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.076 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:37.076 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.076 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.076 [2024-10-05 08:51:13.491722] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:37.076 [2024-10-05 08:51:13.491748] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:37.076 [2024-10-05 08:51:13.492005] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:14:37.076 [2024-10-05 08:51:13.492168] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x617000007b00 00:14:37.076 [2024-10-05 08:51:13.492187] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:37.076 [2024-10-05 08:51:13.492321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.076 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.076 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:37.076 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.076 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.076 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:37.076 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:37.076 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:37.076 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.076 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.076 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.076 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.076 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.076 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.076 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.076 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.076 
08:51:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.336 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.336 "name": "raid_bdev1", 00:14:37.336 "uuid": "71db00c8-1d15-44a7-96b6-d4da7df75c0a", 00:14:37.336 "strip_size_kb": 0, 00:14:37.336 "state": "online", 00:14:37.336 "raid_level": "raid1", 00:14:37.336 "superblock": true, 00:14:37.336 "num_base_bdevs": 4, 00:14:37.336 "num_base_bdevs_discovered": 3, 00:14:37.336 "num_base_bdevs_operational": 3, 00:14:37.336 "base_bdevs_list": [ 00:14:37.336 { 00:14:37.336 "name": "spare", 00:14:37.336 "uuid": "9041abc7-f370-5cce-a8f2-db60a8afd392", 00:14:37.336 "is_configured": true, 00:14:37.336 "data_offset": 2048, 00:14:37.336 "data_size": 63488 00:14:37.336 }, 00:14:37.336 { 00:14:37.336 "name": null, 00:14:37.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.336 "is_configured": false, 00:14:37.336 "data_offset": 2048, 00:14:37.336 "data_size": 63488 00:14:37.336 }, 00:14:37.336 { 00:14:37.336 "name": "BaseBdev3", 00:14:37.336 "uuid": "d81ffc92-1e80-5650-816a-0da6b49a29f5", 00:14:37.336 "is_configured": true, 00:14:37.336 "data_offset": 2048, 00:14:37.336 "data_size": 63488 00:14:37.336 }, 00:14:37.336 { 00:14:37.336 "name": "BaseBdev4", 00:14:37.336 "uuid": "73beaa81-3c62-54f1-9d70-a19dcdc25b7f", 00:14:37.336 "is_configured": true, 00:14:37.336 "data_offset": 2048, 00:14:37.336 "data_size": 63488 00:14:37.336 } 00:14:37.336 ] 00:14:37.336 }' 00:14:37.336 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.336 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.595 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:37.595 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.595 08:51:13 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:37.595 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:37.595 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.595 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.595 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.595 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.595 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.595 08:51:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.595 08:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.595 "name": "raid_bdev1", 00:14:37.595 "uuid": "71db00c8-1d15-44a7-96b6-d4da7df75c0a", 00:14:37.595 "strip_size_kb": 0, 00:14:37.595 "state": "online", 00:14:37.595 "raid_level": "raid1", 00:14:37.595 "superblock": true, 00:14:37.595 "num_base_bdevs": 4, 00:14:37.595 "num_base_bdevs_discovered": 3, 00:14:37.595 "num_base_bdevs_operational": 3, 00:14:37.595 "base_bdevs_list": [ 00:14:37.595 { 00:14:37.595 "name": "spare", 00:14:37.595 "uuid": "9041abc7-f370-5cce-a8f2-db60a8afd392", 00:14:37.595 "is_configured": true, 00:14:37.595 "data_offset": 2048, 00:14:37.595 "data_size": 63488 00:14:37.595 }, 00:14:37.595 { 00:14:37.595 "name": null, 00:14:37.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.595 "is_configured": false, 00:14:37.595 "data_offset": 2048, 00:14:37.595 "data_size": 63488 00:14:37.595 }, 00:14:37.595 { 00:14:37.595 "name": "BaseBdev3", 00:14:37.595 "uuid": "d81ffc92-1e80-5650-816a-0da6b49a29f5", 00:14:37.595 "is_configured": true, 00:14:37.595 "data_offset": 2048, 00:14:37.595 
"data_size": 63488 00:14:37.595 }, 00:14:37.595 { 00:14:37.595 "name": "BaseBdev4", 00:14:37.595 "uuid": "73beaa81-3c62-54f1-9d70-a19dcdc25b7f", 00:14:37.595 "is_configured": true, 00:14:37.595 "data_offset": 2048, 00:14:37.595 "data_size": 63488 00:14:37.595 } 00:14:37.595 ] 00:14:37.595 }' 00:14:37.595 08:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.855 08:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:37.855 08:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.855 08:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:37.855 08:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.855 08:51:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.855 08:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:37.855 08:51:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.855 08:51:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.855 08:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:37.855 08:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:37.855 08:51:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.855 08:51:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.855 [2024-10-05 08:51:14.165064] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:37.855 08:51:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.855 08:51:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:37.855 08:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.855 08:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.855 08:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:37.855 08:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:37.855 08:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:37.855 08:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.855 08:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.855 08:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.855 08:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.855 08:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.855 08:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.855 08:51:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.855 08:51:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.855 08:51:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.855 08:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.855 "name": "raid_bdev1", 00:14:37.855 "uuid": "71db00c8-1d15-44a7-96b6-d4da7df75c0a", 00:14:37.855 "strip_size_kb": 0, 00:14:37.855 "state": "online", 00:14:37.855 "raid_level": "raid1", 00:14:37.855 
"superblock": true, 00:14:37.855 "num_base_bdevs": 4, 00:14:37.855 "num_base_bdevs_discovered": 2, 00:14:37.855 "num_base_bdevs_operational": 2, 00:14:37.855 "base_bdevs_list": [ 00:14:37.855 { 00:14:37.855 "name": null, 00:14:37.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.855 "is_configured": false, 00:14:37.855 "data_offset": 0, 00:14:37.855 "data_size": 63488 00:14:37.855 }, 00:14:37.855 { 00:14:37.855 "name": null, 00:14:37.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.855 "is_configured": false, 00:14:37.855 "data_offset": 2048, 00:14:37.855 "data_size": 63488 00:14:37.855 }, 00:14:37.855 { 00:14:37.855 "name": "BaseBdev3", 00:14:37.855 "uuid": "d81ffc92-1e80-5650-816a-0da6b49a29f5", 00:14:37.855 "is_configured": true, 00:14:37.855 "data_offset": 2048, 00:14:37.855 "data_size": 63488 00:14:37.855 }, 00:14:37.855 { 00:14:37.855 "name": "BaseBdev4", 00:14:37.855 "uuid": "73beaa81-3c62-54f1-9d70-a19dcdc25b7f", 00:14:37.855 "is_configured": true, 00:14:37.855 "data_offset": 2048, 00:14:37.855 "data_size": 63488 00:14:37.855 } 00:14:37.855 ] 00:14:37.855 }' 00:14:37.855 08:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.855 08:51:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.427 08:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:38.427 08:51:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.427 08:51:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.427 [2024-10-05 08:51:14.620361] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:38.427 [2024-10-05 08:51:14.620519] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:38.427 [2024-10-05 08:51:14.620534] 
bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:38.427 [2024-10-05 08:51:14.620569] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:38.427 [2024-10-05 08:51:14.634333] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:14:38.427 08:51:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.427 08:51:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:38.427 [2024-10-05 08:51:14.636102] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:39.398 08:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:39.398 08:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.398 08:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:39.398 08:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:39.398 08:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.398 08:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.398 08:51:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.398 08:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.398 08:51:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.398 08:51:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.398 08:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.398 "name": "raid_bdev1", 00:14:39.398 "uuid": 
"71db00c8-1d15-44a7-96b6-d4da7df75c0a", 00:14:39.398 "strip_size_kb": 0, 00:14:39.398 "state": "online", 00:14:39.398 "raid_level": "raid1", 00:14:39.398 "superblock": true, 00:14:39.398 "num_base_bdevs": 4, 00:14:39.398 "num_base_bdevs_discovered": 3, 00:14:39.398 "num_base_bdevs_operational": 3, 00:14:39.398 "process": { 00:14:39.398 "type": "rebuild", 00:14:39.398 "target": "spare", 00:14:39.398 "progress": { 00:14:39.398 "blocks": 20480, 00:14:39.398 "percent": 32 00:14:39.398 } 00:14:39.398 }, 00:14:39.398 "base_bdevs_list": [ 00:14:39.398 { 00:14:39.398 "name": "spare", 00:14:39.398 "uuid": "9041abc7-f370-5cce-a8f2-db60a8afd392", 00:14:39.398 "is_configured": true, 00:14:39.398 "data_offset": 2048, 00:14:39.398 "data_size": 63488 00:14:39.398 }, 00:14:39.398 { 00:14:39.398 "name": null, 00:14:39.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.398 "is_configured": false, 00:14:39.398 "data_offset": 2048, 00:14:39.398 "data_size": 63488 00:14:39.398 }, 00:14:39.398 { 00:14:39.398 "name": "BaseBdev3", 00:14:39.398 "uuid": "d81ffc92-1e80-5650-816a-0da6b49a29f5", 00:14:39.398 "is_configured": true, 00:14:39.398 "data_offset": 2048, 00:14:39.398 "data_size": 63488 00:14:39.398 }, 00:14:39.398 { 00:14:39.398 "name": "BaseBdev4", 00:14:39.398 "uuid": "73beaa81-3c62-54f1-9d70-a19dcdc25b7f", 00:14:39.398 "is_configured": true, 00:14:39.398 "data_offset": 2048, 00:14:39.398 "data_size": 63488 00:14:39.398 } 00:14:39.398 ] 00:14:39.398 }' 00:14:39.398 08:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.398 08:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:39.398 08:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.398 08:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:39.398 08:51:15 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:39.398 08:51:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.398 08:51:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.398 [2024-10-05 08:51:15.784576] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:39.398 [2024-10-05 08:51:15.840950] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:39.398 [2024-10-05 08:51:15.841015] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.398 [2024-10-05 08:51:15.841050] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:39.398 [2024-10-05 08:51:15.841057] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:39.398 08:51:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.399 08:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:39.399 08:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.399 08:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.399 08:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:39.399 08:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:39.399 08:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:39.399 08:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.399 08:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.399 08:51:15 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.399 08:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.658 08:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.658 08:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.658 08:51:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.658 08:51:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.658 08:51:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.658 08:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.658 "name": "raid_bdev1", 00:14:39.658 "uuid": "71db00c8-1d15-44a7-96b6-d4da7df75c0a", 00:14:39.658 "strip_size_kb": 0, 00:14:39.658 "state": "online", 00:14:39.658 "raid_level": "raid1", 00:14:39.658 "superblock": true, 00:14:39.658 "num_base_bdevs": 4, 00:14:39.658 "num_base_bdevs_discovered": 2, 00:14:39.658 "num_base_bdevs_operational": 2, 00:14:39.658 "base_bdevs_list": [ 00:14:39.658 { 00:14:39.658 "name": null, 00:14:39.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.658 "is_configured": false, 00:14:39.658 "data_offset": 0, 00:14:39.658 "data_size": 63488 00:14:39.658 }, 00:14:39.658 { 00:14:39.658 "name": null, 00:14:39.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.658 "is_configured": false, 00:14:39.658 "data_offset": 2048, 00:14:39.658 "data_size": 63488 00:14:39.658 }, 00:14:39.658 { 00:14:39.658 "name": "BaseBdev3", 00:14:39.658 "uuid": "d81ffc92-1e80-5650-816a-0da6b49a29f5", 00:14:39.658 "is_configured": true, 00:14:39.658 "data_offset": 2048, 00:14:39.658 "data_size": 63488 00:14:39.658 }, 00:14:39.658 { 00:14:39.658 "name": "BaseBdev4", 00:14:39.658 "uuid": "73beaa81-3c62-54f1-9d70-a19dcdc25b7f", 
00:14:39.658 "is_configured": true, 00:14:39.658 "data_offset": 2048, 00:14:39.658 "data_size": 63488 00:14:39.658 } 00:14:39.658 ] 00:14:39.658 }' 00:14:39.658 08:51:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.658 08:51:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.919 08:51:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:39.919 08:51:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.919 08:51:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.919 [2024-10-05 08:51:16.311052] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:39.919 [2024-10-05 08:51:16.311104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.919 [2024-10-05 08:51:16.311132] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:39.919 [2024-10-05 08:51:16.311141] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.919 [2024-10-05 08:51:16.311591] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.919 [2024-10-05 08:51:16.311622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:39.919 [2024-10-05 08:51:16.311701] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:39.919 [2024-10-05 08:51:16.311713] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:39.919 [2024-10-05 08:51:16.311727] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:39.919 [2024-10-05 08:51:16.311750] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:39.919 [2024-10-05 08:51:16.324896] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:14:39.919 spare 00:14:39.919 08:51:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.919 08:51:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:39.919 [2024-10-05 08:51:16.326675] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:41.299 08:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:41.299 08:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.299 08:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:41.299 08:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:41.299 08:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.299 08:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.299 08:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.299 08:51:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.299 08:51:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.299 08:51:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.299 08:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.299 "name": "raid_bdev1", 00:14:41.299 "uuid": "71db00c8-1d15-44a7-96b6-d4da7df75c0a", 00:14:41.299 "strip_size_kb": 0, 00:14:41.299 
"state": "online", 00:14:41.299 "raid_level": "raid1", 00:14:41.299 "superblock": true, 00:14:41.299 "num_base_bdevs": 4, 00:14:41.299 "num_base_bdevs_discovered": 3, 00:14:41.299 "num_base_bdevs_operational": 3, 00:14:41.299 "process": { 00:14:41.299 "type": "rebuild", 00:14:41.299 "target": "spare", 00:14:41.299 "progress": { 00:14:41.299 "blocks": 20480, 00:14:41.299 "percent": 32 00:14:41.299 } 00:14:41.299 }, 00:14:41.299 "base_bdevs_list": [ 00:14:41.299 { 00:14:41.299 "name": "spare", 00:14:41.299 "uuid": "9041abc7-f370-5cce-a8f2-db60a8afd392", 00:14:41.299 "is_configured": true, 00:14:41.299 "data_offset": 2048, 00:14:41.299 "data_size": 63488 00:14:41.299 }, 00:14:41.299 { 00:14:41.299 "name": null, 00:14:41.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.299 "is_configured": false, 00:14:41.299 "data_offset": 2048, 00:14:41.299 "data_size": 63488 00:14:41.299 }, 00:14:41.299 { 00:14:41.299 "name": "BaseBdev3", 00:14:41.299 "uuid": "d81ffc92-1e80-5650-816a-0da6b49a29f5", 00:14:41.299 "is_configured": true, 00:14:41.299 "data_offset": 2048, 00:14:41.299 "data_size": 63488 00:14:41.299 }, 00:14:41.299 { 00:14:41.299 "name": "BaseBdev4", 00:14:41.299 "uuid": "73beaa81-3c62-54f1-9d70-a19dcdc25b7f", 00:14:41.299 "is_configured": true, 00:14:41.299 "data_offset": 2048, 00:14:41.299 "data_size": 63488 00:14:41.299 } 00:14:41.299 ] 00:14:41.299 }' 00:14:41.299 08:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.299 08:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:41.299 08:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.299 08:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:41.299 08:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:41.299 08:51:17 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.299 08:51:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.299 [2024-10-05 08:51:17.483538] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:41.299 [2024-10-05 08:51:17.531380] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:41.299 [2024-10-05 08:51:17.531471] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:41.299 [2024-10-05 08:51:17.531488] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:41.299 [2024-10-05 08:51:17.531500] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:41.299 08:51:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.299 08:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:41.299 08:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.299 08:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:41.299 08:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:41.299 08:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:41.299 08:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:41.299 08:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.299 08:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.299 08:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.299 08:51:17 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.299 08:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.299 08:51:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.299 08:51:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.299 08:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.299 08:51:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.299 08:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.299 "name": "raid_bdev1", 00:14:41.299 "uuid": "71db00c8-1d15-44a7-96b6-d4da7df75c0a", 00:14:41.299 "strip_size_kb": 0, 00:14:41.299 "state": "online", 00:14:41.299 "raid_level": "raid1", 00:14:41.299 "superblock": true, 00:14:41.299 "num_base_bdevs": 4, 00:14:41.299 "num_base_bdevs_discovered": 2, 00:14:41.299 "num_base_bdevs_operational": 2, 00:14:41.299 "base_bdevs_list": [ 00:14:41.299 { 00:14:41.299 "name": null, 00:14:41.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.299 "is_configured": false, 00:14:41.299 "data_offset": 0, 00:14:41.299 "data_size": 63488 00:14:41.299 }, 00:14:41.299 { 00:14:41.299 "name": null, 00:14:41.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.299 "is_configured": false, 00:14:41.299 "data_offset": 2048, 00:14:41.299 "data_size": 63488 00:14:41.299 }, 00:14:41.299 { 00:14:41.299 "name": "BaseBdev3", 00:14:41.299 "uuid": "d81ffc92-1e80-5650-816a-0da6b49a29f5", 00:14:41.299 "is_configured": true, 00:14:41.299 "data_offset": 2048, 00:14:41.299 "data_size": 63488 00:14:41.299 }, 00:14:41.299 { 00:14:41.299 "name": "BaseBdev4", 00:14:41.299 "uuid": "73beaa81-3c62-54f1-9d70-a19dcdc25b7f", 00:14:41.299 "is_configured": true, 00:14:41.299 "data_offset": 2048, 00:14:41.299 
"data_size": 63488 00:14:41.299 } 00:14:41.299 ] 00:14:41.299 }' 00:14:41.299 08:51:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.299 08:51:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.867 08:51:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:41.867 08:51:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.867 08:51:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:41.868 08:51:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:41.868 08:51:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.868 08:51:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.868 08:51:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.868 08:51:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.868 08:51:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.868 08:51:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.868 08:51:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.868 "name": "raid_bdev1", 00:14:41.868 "uuid": "71db00c8-1d15-44a7-96b6-d4da7df75c0a", 00:14:41.868 "strip_size_kb": 0, 00:14:41.868 "state": "online", 00:14:41.868 "raid_level": "raid1", 00:14:41.868 "superblock": true, 00:14:41.868 "num_base_bdevs": 4, 00:14:41.868 "num_base_bdevs_discovered": 2, 00:14:41.868 "num_base_bdevs_operational": 2, 00:14:41.868 "base_bdevs_list": [ 00:14:41.868 { 00:14:41.868 "name": null, 00:14:41.868 "uuid": "00000000-0000-0000-0000-000000000000", 
00:14:41.868 "is_configured": false, 00:14:41.868 "data_offset": 0, 00:14:41.868 "data_size": 63488 00:14:41.868 }, 00:14:41.868 { 00:14:41.868 "name": null, 00:14:41.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.868 "is_configured": false, 00:14:41.868 "data_offset": 2048, 00:14:41.868 "data_size": 63488 00:14:41.868 }, 00:14:41.868 { 00:14:41.868 "name": "BaseBdev3", 00:14:41.868 "uuid": "d81ffc92-1e80-5650-816a-0da6b49a29f5", 00:14:41.868 "is_configured": true, 00:14:41.868 "data_offset": 2048, 00:14:41.868 "data_size": 63488 00:14:41.868 }, 00:14:41.868 { 00:14:41.868 "name": "BaseBdev4", 00:14:41.868 "uuid": "73beaa81-3c62-54f1-9d70-a19dcdc25b7f", 00:14:41.868 "is_configured": true, 00:14:41.868 "data_offset": 2048, 00:14:41.868 "data_size": 63488 00:14:41.868 } 00:14:41.868 ] 00:14:41.868 }' 00:14:41.868 08:51:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.868 08:51:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:41.868 08:51:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.868 08:51:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:41.868 08:51:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:41.868 08:51:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.868 08:51:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.868 08:51:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.868 08:51:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:41.868 08:51:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.868 08:51:18 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.868 [2024-10-05 08:51:18.201035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:41.868 [2024-10-05 08:51:18.201088] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.868 [2024-10-05 08:51:18.201108] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:14:41.868 [2024-10-05 08:51:18.201117] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.868 [2024-10-05 08:51:18.201517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.868 [2024-10-05 08:51:18.201550] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:41.868 [2024-10-05 08:51:18.201616] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:41.868 [2024-10-05 08:51:18.201632] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:41.868 [2024-10-05 08:51:18.201641] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:41.868 [2024-10-05 08:51:18.201652] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:41.868 BaseBdev1 00:14:41.868 08:51:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.868 08:51:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:42.805 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:42.805 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.805 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:14:42.805 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:42.805 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:42.805 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:42.805 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.805 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.805 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.805 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.805 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.805 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.805 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.805 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.805 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.805 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.805 "name": "raid_bdev1", 00:14:42.805 "uuid": "71db00c8-1d15-44a7-96b6-d4da7df75c0a", 00:14:42.805 "strip_size_kb": 0, 00:14:42.805 "state": "online", 00:14:42.805 "raid_level": "raid1", 00:14:42.805 "superblock": true, 00:14:42.805 "num_base_bdevs": 4, 00:14:42.805 "num_base_bdevs_discovered": 2, 00:14:42.805 "num_base_bdevs_operational": 2, 00:14:42.805 "base_bdevs_list": [ 00:14:42.805 { 00:14:42.805 "name": null, 00:14:42.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.805 "is_configured": false, 00:14:42.805 
"data_offset": 0, 00:14:42.805 "data_size": 63488 00:14:42.805 }, 00:14:42.805 { 00:14:42.805 "name": null, 00:14:42.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.805 "is_configured": false, 00:14:42.805 "data_offset": 2048, 00:14:42.805 "data_size": 63488 00:14:42.805 }, 00:14:42.805 { 00:14:42.805 "name": "BaseBdev3", 00:14:42.805 "uuid": "d81ffc92-1e80-5650-816a-0da6b49a29f5", 00:14:42.805 "is_configured": true, 00:14:42.805 "data_offset": 2048, 00:14:42.805 "data_size": 63488 00:14:42.805 }, 00:14:42.805 { 00:14:42.805 "name": "BaseBdev4", 00:14:42.805 "uuid": "73beaa81-3c62-54f1-9d70-a19dcdc25b7f", 00:14:42.805 "is_configured": true, 00:14:42.805 "data_offset": 2048, 00:14:42.805 "data_size": 63488 00:14:42.805 } 00:14:42.805 ] 00:14:42.805 }' 00:14:42.805 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.805 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.375 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:43.375 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.375 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:43.375 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:43.375 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.375 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.375 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.375 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.375 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:14:43.375 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.375 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.375 "name": "raid_bdev1", 00:14:43.375 "uuid": "71db00c8-1d15-44a7-96b6-d4da7df75c0a", 00:14:43.375 "strip_size_kb": 0, 00:14:43.375 "state": "online", 00:14:43.375 "raid_level": "raid1", 00:14:43.375 "superblock": true, 00:14:43.375 "num_base_bdevs": 4, 00:14:43.375 "num_base_bdevs_discovered": 2, 00:14:43.375 "num_base_bdevs_operational": 2, 00:14:43.375 "base_bdevs_list": [ 00:14:43.375 { 00:14:43.375 "name": null, 00:14:43.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.375 "is_configured": false, 00:14:43.375 "data_offset": 0, 00:14:43.375 "data_size": 63488 00:14:43.375 }, 00:14:43.375 { 00:14:43.375 "name": null, 00:14:43.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.375 "is_configured": false, 00:14:43.375 "data_offset": 2048, 00:14:43.375 "data_size": 63488 00:14:43.375 }, 00:14:43.375 { 00:14:43.375 "name": "BaseBdev3", 00:14:43.375 "uuid": "d81ffc92-1e80-5650-816a-0da6b49a29f5", 00:14:43.375 "is_configured": true, 00:14:43.375 "data_offset": 2048, 00:14:43.375 "data_size": 63488 00:14:43.375 }, 00:14:43.375 { 00:14:43.375 "name": "BaseBdev4", 00:14:43.375 "uuid": "73beaa81-3c62-54f1-9d70-a19dcdc25b7f", 00:14:43.375 "is_configured": true, 00:14:43.375 "data_offset": 2048, 00:14:43.375 "data_size": 63488 00:14:43.375 } 00:14:43.375 ] 00:14:43.375 }' 00:14:43.375 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.375 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:43.375 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.375 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:43.375 
08:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:43.375 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:14:43.376 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:43.376 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:43.376 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:43.376 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:43.376 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:43.376 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:43.376 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.376 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.376 [2024-10-05 08:51:19.794636] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:43.376 [2024-10-05 08:51:19.794768] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:43.376 [2024-10-05 08:51:19.794781] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:43.376 request: 00:14:43.376 { 00:14:43.376 "base_bdev": "BaseBdev1", 00:14:43.376 "raid_bdev": "raid_bdev1", 00:14:43.376 "method": "bdev_raid_add_base_bdev", 00:14:43.376 "req_id": 1 00:14:43.376 } 00:14:43.376 Got JSON-RPC error response 00:14:43.376 response: 00:14:43.376 { 00:14:43.376 "code": -22, 00:14:43.376 
"message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:43.376 } 00:14:43.376 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:43.376 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:14:43.376 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:43.376 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:43.376 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:43.376 08:51:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:44.754 08:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:44.754 08:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.754 08:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.754 08:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:44.754 08:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:44.754 08:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:44.754 08:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.754 08:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.754 08:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.754 08:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.754 08:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.754 08:51:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.754 08:51:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.754 08:51:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.754 08:51:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.754 08:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.754 "name": "raid_bdev1", 00:14:44.754 "uuid": "71db00c8-1d15-44a7-96b6-d4da7df75c0a", 00:14:44.754 "strip_size_kb": 0, 00:14:44.754 "state": "online", 00:14:44.754 "raid_level": "raid1", 00:14:44.754 "superblock": true, 00:14:44.754 "num_base_bdevs": 4, 00:14:44.754 "num_base_bdevs_discovered": 2, 00:14:44.754 "num_base_bdevs_operational": 2, 00:14:44.754 "base_bdevs_list": [ 00:14:44.754 { 00:14:44.754 "name": null, 00:14:44.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.754 "is_configured": false, 00:14:44.754 "data_offset": 0, 00:14:44.754 "data_size": 63488 00:14:44.754 }, 00:14:44.754 { 00:14:44.754 "name": null, 00:14:44.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.754 "is_configured": false, 00:14:44.754 "data_offset": 2048, 00:14:44.754 "data_size": 63488 00:14:44.754 }, 00:14:44.754 { 00:14:44.754 "name": "BaseBdev3", 00:14:44.754 "uuid": "d81ffc92-1e80-5650-816a-0da6b49a29f5", 00:14:44.754 "is_configured": true, 00:14:44.754 "data_offset": 2048, 00:14:44.754 "data_size": 63488 00:14:44.754 }, 00:14:44.755 { 00:14:44.755 "name": "BaseBdev4", 00:14:44.755 "uuid": "73beaa81-3c62-54f1-9d70-a19dcdc25b7f", 00:14:44.755 "is_configured": true, 00:14:44.755 "data_offset": 2048, 00:14:44.755 "data_size": 63488 00:14:44.755 } 00:14:44.755 ] 00:14:44.755 }' 00:14:44.755 08:51:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.755 08:51:20 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.014 08:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:45.014 08:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.014 08:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:45.014 08:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:45.014 08:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.014 08:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.014 08:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.014 08:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.014 08:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.014 08:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.014 08:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.014 "name": "raid_bdev1", 00:14:45.014 "uuid": "71db00c8-1d15-44a7-96b6-d4da7df75c0a", 00:14:45.014 "strip_size_kb": 0, 00:14:45.014 "state": "online", 00:14:45.014 "raid_level": "raid1", 00:14:45.014 "superblock": true, 00:14:45.014 "num_base_bdevs": 4, 00:14:45.014 "num_base_bdevs_discovered": 2, 00:14:45.014 "num_base_bdevs_operational": 2, 00:14:45.014 "base_bdevs_list": [ 00:14:45.015 { 00:14:45.015 "name": null, 00:14:45.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.015 "is_configured": false, 00:14:45.015 "data_offset": 0, 00:14:45.015 "data_size": 63488 00:14:45.015 }, 00:14:45.015 { 00:14:45.015 "name": null, 00:14:45.015 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:45.015 "is_configured": false, 00:14:45.015 "data_offset": 2048, 00:14:45.015 "data_size": 63488 00:14:45.015 }, 00:14:45.015 { 00:14:45.015 "name": "BaseBdev3", 00:14:45.015 "uuid": "d81ffc92-1e80-5650-816a-0da6b49a29f5", 00:14:45.015 "is_configured": true, 00:14:45.015 "data_offset": 2048, 00:14:45.015 "data_size": 63488 00:14:45.015 }, 00:14:45.015 { 00:14:45.015 "name": "BaseBdev4", 00:14:45.015 "uuid": "73beaa81-3c62-54f1-9d70-a19dcdc25b7f", 00:14:45.015 "is_configured": true, 00:14:45.015 "data_offset": 2048, 00:14:45.015 "data_size": 63488 00:14:45.015 } 00:14:45.015 ] 00:14:45.015 }' 00:14:45.015 08:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.015 08:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:45.015 08:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.015 08:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:45.015 08:51:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76542 00:14:45.015 08:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 76542 ']' 00:14:45.015 08:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 76542 00:14:45.015 08:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:14:45.015 08:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:45.015 08:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76542 00:14:45.015 08:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:45.015 08:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo 
']' 00:14:45.274 killing process with pid 76542 00:14:45.274 08:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76542' 00:14:45.274 08:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 76542 00:14:45.274 Received shutdown signal, test time was about 17.902324 seconds 00:14:45.274 00:14:45.274 Latency(us) 00:14:45.274 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:45.274 =================================================================================================================== 00:14:45.274 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:45.274 [2024-10-05 08:51:21.485882] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:45.274 [2024-10-05 08:51:21.486013] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:45.275 [2024-10-05 08:51:21.486087] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:45.275 08:51:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 76542 00:14:45.275 [2024-10-05 08:51:21.486100] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:45.534 [2024-10-05 08:51:21.877322] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:46.916 08:51:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:46.916 00:14:46.916 real 0m21.331s 00:14:46.916 user 0m27.962s 00:14:46.916 sys 0m2.648s 00:14:46.916 08:51:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:46.916 08:51:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.916 ************************************ 00:14:46.916 END TEST raid_rebuild_test_sb_io 00:14:46.916 ************************************ 00:14:46.916 08:51:23 bdev_raid -- 
bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:46.916 08:51:23 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:14:46.916 08:51:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:46.916 08:51:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:46.916 08:51:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:46.916 ************************************ 00:14:46.916 START TEST raid5f_state_function_test 00:14:46.916 ************************************ 00:14:46.916 08:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 false 00:14:46.916 08:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:46.916 08:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:46.916 08:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:46.916 08:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:46.916 08:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:46.916 08:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:46.916 08:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:46.916 08:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:46.916 08:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:46.916 08:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:46.916 08:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:46.916 08:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:14:46.916 08:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:46.916 08:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:46.916 08:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:46.916 08:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:46.916 08:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:46.916 08:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:46.916 08:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:46.916 08:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:46.916 08:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:46.916 08:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:46.916 08:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:46.916 08:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:46.916 08:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:46.916 08:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:46.916 08:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=77139 00:14:46.916 08:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:46.916 08:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77139' 
00:14:46.916 Process raid pid: 77139 00:14:46.916 08:51:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 77139 00:14:46.916 08:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 77139 ']' 00:14:46.916 08:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.916 08:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:46.916 08:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.916 08:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:46.916 08:51:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.916 [2024-10-05 08:51:23.311710] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:14:46.916 [2024-10-05 08:51:23.311866] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:47.176 [2024-10-05 08:51:23.481640] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.435 [2024-10-05 08:51:23.681506] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.435 [2024-10-05 08:51:23.861835] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:47.435 [2024-10-05 08:51:23.861874] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:47.695 08:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:47.695 08:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:14:47.695 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:47.695 08:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.695 08:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.695 [2024-10-05 08:51:24.116511] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:47.695 [2024-10-05 08:51:24.116569] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:47.695 [2024-10-05 08:51:24.116579] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:47.695 [2024-10-05 08:51:24.116587] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:47.695 [2024-10-05 08:51:24.116593] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:47.695 [2024-10-05 08:51:24.116601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:47.695 08:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.695 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:47.695 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.695 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:47.695 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:47.695 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.695 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:47.695 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.695 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.695 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.695 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.695 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.695 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.695 08:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.695 08:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.695 08:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:14:47.954 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.954 "name": "Existed_Raid", 00:14:47.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.954 "strip_size_kb": 64, 00:14:47.954 "state": "configuring", 00:14:47.954 "raid_level": "raid5f", 00:14:47.954 "superblock": false, 00:14:47.954 "num_base_bdevs": 3, 00:14:47.954 "num_base_bdevs_discovered": 0, 00:14:47.954 "num_base_bdevs_operational": 3, 00:14:47.954 "base_bdevs_list": [ 00:14:47.954 { 00:14:47.954 "name": "BaseBdev1", 00:14:47.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.954 "is_configured": false, 00:14:47.954 "data_offset": 0, 00:14:47.954 "data_size": 0 00:14:47.954 }, 00:14:47.954 { 00:14:47.954 "name": "BaseBdev2", 00:14:47.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.954 "is_configured": false, 00:14:47.954 "data_offset": 0, 00:14:47.954 "data_size": 0 00:14:47.954 }, 00:14:47.954 { 00:14:47.954 "name": "BaseBdev3", 00:14:47.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.954 "is_configured": false, 00:14:47.954 "data_offset": 0, 00:14:47.954 "data_size": 0 00:14:47.954 } 00:14:47.954 ] 00:14:47.954 }' 00:14:47.954 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.954 08:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.214 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:48.214 08:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.214 08:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.214 [2024-10-05 08:51:24.587651] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:48.214 [2024-10-05 08:51:24.587693] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:14:48.214 08:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.214 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:48.214 08:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.214 08:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.214 [2024-10-05 08:51:24.599652] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:48.214 [2024-10-05 08:51:24.599695] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:48.214 [2024-10-05 08:51:24.599703] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:48.214 [2024-10-05 08:51:24.599712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:48.214 [2024-10-05 08:51:24.599718] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:48.214 [2024-10-05 08:51:24.599726] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:48.214 08:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.214 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:48.214 08:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.214 08:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.214 [2024-10-05 08:51:24.681385] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:48.214 BaseBdev1 00:14:48.214 08:51:24 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.214 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:48.214 08:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:48.214 08:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:48.214 08:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:48.214 08:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:48.214 08:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:48.475 08:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:48.475 08:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.475 08:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.475 08:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.475 08:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:48.475 08:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.475 08:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.475 [ 00:14:48.475 { 00:14:48.475 "name": "BaseBdev1", 00:14:48.475 "aliases": [ 00:14:48.475 "3af87558-c518-4ccb-960e-134576e43a04" 00:14:48.475 ], 00:14:48.475 "product_name": "Malloc disk", 00:14:48.475 "block_size": 512, 00:14:48.475 "num_blocks": 65536, 00:14:48.475 "uuid": "3af87558-c518-4ccb-960e-134576e43a04", 00:14:48.475 "assigned_rate_limits": { 00:14:48.475 "rw_ios_per_sec": 0, 00:14:48.475 
"rw_mbytes_per_sec": 0, 00:14:48.475 "r_mbytes_per_sec": 0, 00:14:48.475 "w_mbytes_per_sec": 0 00:14:48.475 }, 00:14:48.475 "claimed": true, 00:14:48.475 "claim_type": "exclusive_write", 00:14:48.475 "zoned": false, 00:14:48.475 "supported_io_types": { 00:14:48.475 "read": true, 00:14:48.475 "write": true, 00:14:48.475 "unmap": true, 00:14:48.475 "flush": true, 00:14:48.475 "reset": true, 00:14:48.475 "nvme_admin": false, 00:14:48.475 "nvme_io": false, 00:14:48.475 "nvme_io_md": false, 00:14:48.475 "write_zeroes": true, 00:14:48.475 "zcopy": true, 00:14:48.475 "get_zone_info": false, 00:14:48.475 "zone_management": false, 00:14:48.475 "zone_append": false, 00:14:48.475 "compare": false, 00:14:48.475 "compare_and_write": false, 00:14:48.475 "abort": true, 00:14:48.475 "seek_hole": false, 00:14:48.475 "seek_data": false, 00:14:48.475 "copy": true, 00:14:48.475 "nvme_iov_md": false 00:14:48.475 }, 00:14:48.475 "memory_domains": [ 00:14:48.475 { 00:14:48.475 "dma_device_id": "system", 00:14:48.475 "dma_device_type": 1 00:14:48.475 }, 00:14:48.475 { 00:14:48.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.475 "dma_device_type": 2 00:14:48.475 } 00:14:48.475 ], 00:14:48.475 "driver_specific": {} 00:14:48.475 } 00:14:48.475 ] 00:14:48.475 08:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.475 08:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:48.475 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:48.475 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.475 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:48.475 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:48.475 08:51:24 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.475 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:48.475 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.475 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.475 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.475 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.475 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.475 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.475 08:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.475 08:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.475 08:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.475 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.475 "name": "Existed_Raid", 00:14:48.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.475 "strip_size_kb": 64, 00:14:48.475 "state": "configuring", 00:14:48.475 "raid_level": "raid5f", 00:14:48.475 "superblock": false, 00:14:48.475 "num_base_bdevs": 3, 00:14:48.475 "num_base_bdevs_discovered": 1, 00:14:48.475 "num_base_bdevs_operational": 3, 00:14:48.475 "base_bdevs_list": [ 00:14:48.475 { 00:14:48.475 "name": "BaseBdev1", 00:14:48.475 "uuid": "3af87558-c518-4ccb-960e-134576e43a04", 00:14:48.475 "is_configured": true, 00:14:48.475 "data_offset": 0, 00:14:48.475 "data_size": 65536 00:14:48.475 }, 00:14:48.475 { 00:14:48.475 "name": 
"BaseBdev2", 00:14:48.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.475 "is_configured": false, 00:14:48.475 "data_offset": 0, 00:14:48.475 "data_size": 0 00:14:48.475 }, 00:14:48.475 { 00:14:48.475 "name": "BaseBdev3", 00:14:48.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.475 "is_configured": false, 00:14:48.475 "data_offset": 0, 00:14:48.475 "data_size": 0 00:14:48.475 } 00:14:48.475 ] 00:14:48.475 }' 00:14:48.475 08:51:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.475 08:51:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.045 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:49.045 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.045 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.045 [2024-10-05 08:51:25.217061] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:49.045 [2024-10-05 08:51:25.217102] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:49.045 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.045 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:49.045 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.045 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.045 [2024-10-05 08:51:25.229108] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:49.045 [2024-10-05 08:51:25.230826] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:14:49.045 [2024-10-05 08:51:25.230871] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:49.045 [2024-10-05 08:51:25.230880] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:49.045 [2024-10-05 08:51:25.230888] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:49.045 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.045 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:49.045 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:49.045 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:49.045 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.045 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:49.045 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:49.045 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.045 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:49.045 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.045 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.045 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.045 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.045 08:51:25 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.045 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.045 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.045 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.045 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.045 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.045 "name": "Existed_Raid", 00:14:49.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.045 "strip_size_kb": 64, 00:14:49.045 "state": "configuring", 00:14:49.045 "raid_level": "raid5f", 00:14:49.045 "superblock": false, 00:14:49.045 "num_base_bdevs": 3, 00:14:49.045 "num_base_bdevs_discovered": 1, 00:14:49.045 "num_base_bdevs_operational": 3, 00:14:49.045 "base_bdevs_list": [ 00:14:49.045 { 00:14:49.045 "name": "BaseBdev1", 00:14:49.045 "uuid": "3af87558-c518-4ccb-960e-134576e43a04", 00:14:49.045 "is_configured": true, 00:14:49.045 "data_offset": 0, 00:14:49.045 "data_size": 65536 00:14:49.045 }, 00:14:49.045 { 00:14:49.045 "name": "BaseBdev2", 00:14:49.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.045 "is_configured": false, 00:14:49.045 "data_offset": 0, 00:14:49.045 "data_size": 0 00:14:49.045 }, 00:14:49.045 { 00:14:49.045 "name": "BaseBdev3", 00:14:49.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.045 "is_configured": false, 00:14:49.045 "data_offset": 0, 00:14:49.045 "data_size": 0 00:14:49.045 } 00:14:49.045 ] 00:14:49.045 }' 00:14:49.045 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.045 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.304 08:51:25 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:49.304 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.304 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.304 [2024-10-05 08:51:25.760515] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:49.304 BaseBdev2 00:14:49.304 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.304 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:49.304 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:49.304 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:49.304 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:49.304 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:49.304 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:49.304 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:49.304 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.304 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.304 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.304 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:49.304 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.304 08:51:25 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:49.563 [ 00:14:49.563 { 00:14:49.563 "name": "BaseBdev2", 00:14:49.563 "aliases": [ 00:14:49.563 "73723149-1f30-40d9-9c78-c75515a78349" 00:14:49.563 ], 00:14:49.563 "product_name": "Malloc disk", 00:14:49.563 "block_size": 512, 00:14:49.563 "num_blocks": 65536, 00:14:49.563 "uuid": "73723149-1f30-40d9-9c78-c75515a78349", 00:14:49.563 "assigned_rate_limits": { 00:14:49.563 "rw_ios_per_sec": 0, 00:14:49.563 "rw_mbytes_per_sec": 0, 00:14:49.563 "r_mbytes_per_sec": 0, 00:14:49.563 "w_mbytes_per_sec": 0 00:14:49.563 }, 00:14:49.563 "claimed": true, 00:14:49.563 "claim_type": "exclusive_write", 00:14:49.563 "zoned": false, 00:14:49.563 "supported_io_types": { 00:14:49.563 "read": true, 00:14:49.563 "write": true, 00:14:49.563 "unmap": true, 00:14:49.563 "flush": true, 00:14:49.563 "reset": true, 00:14:49.563 "nvme_admin": false, 00:14:49.563 "nvme_io": false, 00:14:49.563 "nvme_io_md": false, 00:14:49.563 "write_zeroes": true, 00:14:49.563 "zcopy": true, 00:14:49.563 "get_zone_info": false, 00:14:49.563 "zone_management": false, 00:14:49.563 "zone_append": false, 00:14:49.563 "compare": false, 00:14:49.563 "compare_and_write": false, 00:14:49.563 "abort": true, 00:14:49.563 "seek_hole": false, 00:14:49.563 "seek_data": false, 00:14:49.563 "copy": true, 00:14:49.563 "nvme_iov_md": false 00:14:49.563 }, 00:14:49.563 "memory_domains": [ 00:14:49.564 { 00:14:49.564 "dma_device_id": "system", 00:14:49.564 "dma_device_type": 1 00:14:49.564 }, 00:14:49.564 { 00:14:49.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.564 "dma_device_type": 2 00:14:49.564 } 00:14:49.564 ], 00:14:49.564 "driver_specific": {} 00:14:49.564 } 00:14:49.564 ] 00:14:49.564 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.564 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:49.564 08:51:25 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:49.564 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:49.564 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:49.564 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.564 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:49.564 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:49.564 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.564 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:49.564 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.564 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.564 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.564 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.564 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.564 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.564 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.564 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.564 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.564 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:14:49.564 "name": "Existed_Raid", 00:14:49.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.564 "strip_size_kb": 64, 00:14:49.564 "state": "configuring", 00:14:49.564 "raid_level": "raid5f", 00:14:49.564 "superblock": false, 00:14:49.564 "num_base_bdevs": 3, 00:14:49.564 "num_base_bdevs_discovered": 2, 00:14:49.564 "num_base_bdevs_operational": 3, 00:14:49.564 "base_bdevs_list": [ 00:14:49.564 { 00:14:49.564 "name": "BaseBdev1", 00:14:49.564 "uuid": "3af87558-c518-4ccb-960e-134576e43a04", 00:14:49.564 "is_configured": true, 00:14:49.564 "data_offset": 0, 00:14:49.564 "data_size": 65536 00:14:49.564 }, 00:14:49.564 { 00:14:49.564 "name": "BaseBdev2", 00:14:49.564 "uuid": "73723149-1f30-40d9-9c78-c75515a78349", 00:14:49.564 "is_configured": true, 00:14:49.564 "data_offset": 0, 00:14:49.564 "data_size": 65536 00:14:49.564 }, 00:14:49.564 { 00:14:49.564 "name": "BaseBdev3", 00:14:49.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.564 "is_configured": false, 00:14:49.564 "data_offset": 0, 00:14:49.564 "data_size": 0 00:14:49.564 } 00:14:49.564 ] 00:14:49.564 }' 00:14:49.564 08:51:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.564 08:51:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.823 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:49.823 08:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.823 08:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.823 [2024-10-05 08:51:26.280889] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:49.823 [2024-10-05 08:51:26.280944] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:49.823 [2024-10-05 08:51:26.280998] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:49.823 [2024-10-05 08:51:26.281257] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:49.823 [2024-10-05 08:51:26.286471] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:49.823 [2024-10-05 08:51:26.286502] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:49.823 [2024-10-05 08:51:26.286750] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.823 BaseBdev3 00:14:49.823 08:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.823 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:49.823 08:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:49.823 08:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:49.823 08:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:49.823 08:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:49.823 08:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:49.823 08:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:49.823 08:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.823 08:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.083 08:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.083 08:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:14:50.083 08:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.083 08:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.083 [ 00:14:50.083 { 00:14:50.083 "name": "BaseBdev3", 00:14:50.083 "aliases": [ 00:14:50.083 "e151ff2e-8fbb-4130-ad8b-276885cc0fde" 00:14:50.083 ], 00:14:50.083 "product_name": "Malloc disk", 00:14:50.083 "block_size": 512, 00:14:50.083 "num_blocks": 65536, 00:14:50.083 "uuid": "e151ff2e-8fbb-4130-ad8b-276885cc0fde", 00:14:50.083 "assigned_rate_limits": { 00:14:50.083 "rw_ios_per_sec": 0, 00:14:50.083 "rw_mbytes_per_sec": 0, 00:14:50.083 "r_mbytes_per_sec": 0, 00:14:50.083 "w_mbytes_per_sec": 0 00:14:50.083 }, 00:14:50.083 "claimed": true, 00:14:50.083 "claim_type": "exclusive_write", 00:14:50.083 "zoned": false, 00:14:50.083 "supported_io_types": { 00:14:50.083 "read": true, 00:14:50.083 "write": true, 00:14:50.083 "unmap": true, 00:14:50.083 "flush": true, 00:14:50.083 "reset": true, 00:14:50.083 "nvme_admin": false, 00:14:50.083 "nvme_io": false, 00:14:50.083 "nvme_io_md": false, 00:14:50.083 "write_zeroes": true, 00:14:50.083 "zcopy": true, 00:14:50.083 "get_zone_info": false, 00:14:50.083 "zone_management": false, 00:14:50.083 "zone_append": false, 00:14:50.083 "compare": false, 00:14:50.083 "compare_and_write": false, 00:14:50.083 "abort": true, 00:14:50.083 "seek_hole": false, 00:14:50.083 "seek_data": false, 00:14:50.083 "copy": true, 00:14:50.083 "nvme_iov_md": false 00:14:50.083 }, 00:14:50.083 "memory_domains": [ 00:14:50.083 { 00:14:50.083 "dma_device_id": "system", 00:14:50.083 "dma_device_type": 1 00:14:50.083 }, 00:14:50.083 { 00:14:50.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.083 "dma_device_type": 2 00:14:50.083 } 00:14:50.083 ], 00:14:50.083 "driver_specific": {} 00:14:50.083 } 00:14:50.083 ] 00:14:50.083 08:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:14:50.083 08:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:50.083 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:50.083 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:50.083 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:50.083 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.083 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:50.083 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:50.083 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.083 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:50.083 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.083 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.083 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.083 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.083 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.083 08:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.083 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.083 08:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.083 08:51:26 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.083 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.083 "name": "Existed_Raid", 00:14:50.083 "uuid": "f08afe0b-ca92-474b-b9f8-d5b5383fdbfa", 00:14:50.083 "strip_size_kb": 64, 00:14:50.083 "state": "online", 00:14:50.083 "raid_level": "raid5f", 00:14:50.083 "superblock": false, 00:14:50.083 "num_base_bdevs": 3, 00:14:50.083 "num_base_bdevs_discovered": 3, 00:14:50.083 "num_base_bdevs_operational": 3, 00:14:50.083 "base_bdevs_list": [ 00:14:50.083 { 00:14:50.083 "name": "BaseBdev1", 00:14:50.083 "uuid": "3af87558-c518-4ccb-960e-134576e43a04", 00:14:50.083 "is_configured": true, 00:14:50.083 "data_offset": 0, 00:14:50.083 "data_size": 65536 00:14:50.083 }, 00:14:50.083 { 00:14:50.083 "name": "BaseBdev2", 00:14:50.083 "uuid": "73723149-1f30-40d9-9c78-c75515a78349", 00:14:50.083 "is_configured": true, 00:14:50.083 "data_offset": 0, 00:14:50.083 "data_size": 65536 00:14:50.083 }, 00:14:50.083 { 00:14:50.083 "name": "BaseBdev3", 00:14:50.083 "uuid": "e151ff2e-8fbb-4130-ad8b-276885cc0fde", 00:14:50.083 "is_configured": true, 00:14:50.083 "data_offset": 0, 00:14:50.083 "data_size": 65536 00:14:50.083 } 00:14:50.083 ] 00:14:50.083 }' 00:14:50.083 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.083 08:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.652 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:50.652 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:50.652 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:50.652 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:50.653 08:51:26 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:50.653 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:50.653 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:50.653 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:50.653 08:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.653 08:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.653 [2024-10-05 08:51:26.839969] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:50.653 08:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.653 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:50.653 "name": "Existed_Raid", 00:14:50.653 "aliases": [ 00:14:50.653 "f08afe0b-ca92-474b-b9f8-d5b5383fdbfa" 00:14:50.653 ], 00:14:50.653 "product_name": "Raid Volume", 00:14:50.653 "block_size": 512, 00:14:50.653 "num_blocks": 131072, 00:14:50.653 "uuid": "f08afe0b-ca92-474b-b9f8-d5b5383fdbfa", 00:14:50.653 "assigned_rate_limits": { 00:14:50.653 "rw_ios_per_sec": 0, 00:14:50.653 "rw_mbytes_per_sec": 0, 00:14:50.653 "r_mbytes_per_sec": 0, 00:14:50.653 "w_mbytes_per_sec": 0 00:14:50.653 }, 00:14:50.653 "claimed": false, 00:14:50.653 "zoned": false, 00:14:50.653 "supported_io_types": { 00:14:50.653 "read": true, 00:14:50.653 "write": true, 00:14:50.653 "unmap": false, 00:14:50.653 "flush": false, 00:14:50.653 "reset": true, 00:14:50.653 "nvme_admin": false, 00:14:50.653 "nvme_io": false, 00:14:50.653 "nvme_io_md": false, 00:14:50.653 "write_zeroes": true, 00:14:50.653 "zcopy": false, 00:14:50.653 "get_zone_info": false, 00:14:50.653 "zone_management": false, 00:14:50.653 "zone_append": false, 
00:14:50.653 "compare": false, 00:14:50.653 "compare_and_write": false, 00:14:50.653 "abort": false, 00:14:50.653 "seek_hole": false, 00:14:50.653 "seek_data": false, 00:14:50.653 "copy": false, 00:14:50.653 "nvme_iov_md": false 00:14:50.653 }, 00:14:50.653 "driver_specific": { 00:14:50.653 "raid": { 00:14:50.653 "uuid": "f08afe0b-ca92-474b-b9f8-d5b5383fdbfa", 00:14:50.653 "strip_size_kb": 64, 00:14:50.653 "state": "online", 00:14:50.653 "raid_level": "raid5f", 00:14:50.653 "superblock": false, 00:14:50.653 "num_base_bdevs": 3, 00:14:50.653 "num_base_bdevs_discovered": 3, 00:14:50.653 "num_base_bdevs_operational": 3, 00:14:50.653 "base_bdevs_list": [ 00:14:50.653 { 00:14:50.653 "name": "BaseBdev1", 00:14:50.653 "uuid": "3af87558-c518-4ccb-960e-134576e43a04", 00:14:50.653 "is_configured": true, 00:14:50.653 "data_offset": 0, 00:14:50.653 "data_size": 65536 00:14:50.653 }, 00:14:50.653 { 00:14:50.653 "name": "BaseBdev2", 00:14:50.653 "uuid": "73723149-1f30-40d9-9c78-c75515a78349", 00:14:50.653 "is_configured": true, 00:14:50.653 "data_offset": 0, 00:14:50.653 "data_size": 65536 00:14:50.653 }, 00:14:50.653 { 00:14:50.653 "name": "BaseBdev3", 00:14:50.653 "uuid": "e151ff2e-8fbb-4130-ad8b-276885cc0fde", 00:14:50.653 "is_configured": true, 00:14:50.653 "data_offset": 0, 00:14:50.653 "data_size": 65536 00:14:50.653 } 00:14:50.653 ] 00:14:50.653 } 00:14:50.653 } 00:14:50.653 }' 00:14:50.653 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:50.653 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:50.653 BaseBdev2 00:14:50.653 BaseBdev3' 00:14:50.653 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.653 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:14:50.653 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:50.653 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:50.653 08:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.653 08:51:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.653 08:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.653 08:51:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.653 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:50.653 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:50.653 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:50.653 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:50.653 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.653 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.653 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.653 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.653 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:50.653 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:50.653 08:51:27 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:50.653 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:50.653 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:50.653 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.653 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.653 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.653 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:50.653 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:50.653 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:50.653 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.653 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.653 [2024-10-05 08:51:27.107326] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:50.912 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.912 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:50.912 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:50.912 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:50.912 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:50.912 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:50.912 
08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:50.912 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.912 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:50.912 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:50.912 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.912 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:50.912 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.912 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.912 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.912 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.913 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.913 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.913 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.913 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.913 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.913 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.913 "name": "Existed_Raid", 00:14:50.913 "uuid": "f08afe0b-ca92-474b-b9f8-d5b5383fdbfa", 00:14:50.913 "strip_size_kb": 64, 00:14:50.913 "state": 
"online", 00:14:50.913 "raid_level": "raid5f", 00:14:50.913 "superblock": false, 00:14:50.913 "num_base_bdevs": 3, 00:14:50.913 "num_base_bdevs_discovered": 2, 00:14:50.913 "num_base_bdevs_operational": 2, 00:14:50.913 "base_bdevs_list": [ 00:14:50.913 { 00:14:50.913 "name": null, 00:14:50.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.913 "is_configured": false, 00:14:50.913 "data_offset": 0, 00:14:50.913 "data_size": 65536 00:14:50.913 }, 00:14:50.913 { 00:14:50.913 "name": "BaseBdev2", 00:14:50.913 "uuid": "73723149-1f30-40d9-9c78-c75515a78349", 00:14:50.913 "is_configured": true, 00:14:50.913 "data_offset": 0, 00:14:50.913 "data_size": 65536 00:14:50.913 }, 00:14:50.913 { 00:14:50.913 "name": "BaseBdev3", 00:14:50.913 "uuid": "e151ff2e-8fbb-4130-ad8b-276885cc0fde", 00:14:50.913 "is_configured": true, 00:14:50.913 "data_offset": 0, 00:14:50.913 "data_size": 65536 00:14:50.913 } 00:14:50.913 ] 00:14:50.913 }' 00:14:50.913 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.913 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.482 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:51.482 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:51.482 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.482 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.482 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:51.482 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.482 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.482 08:51:27 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:51.482 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:51.482 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:51.482 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.482 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.482 [2024-10-05 08:51:27.733112] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:51.482 [2024-10-05 08:51:27.733214] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:51.482 [2024-10-05 08:51:27.821590] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:51.482 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.482 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:51.482 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:51.482 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.482 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:51.482 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.482 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.482 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.482 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:51.482 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:14:51.482 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:51.482 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.482 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.482 [2024-10-05 08:51:27.885479] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:51.482 [2024-10-05 08:51:27.885532] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:51.742 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.742 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:51.742 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:51.742 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.742 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.742 08:51:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:51.742 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.742 08:51:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.742 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:51.742 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:51.742 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:51.742 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:51.742 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:14:51.742 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:51.742 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.742 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.742 BaseBdev2 00:14:51.742 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.742 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:51.742 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:51.742 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:51.742 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:51.742 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:51.742 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:51.742 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:51.742 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.742 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.742 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.742 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:51.742 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.742 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:51.742 [ 00:14:51.742 { 00:14:51.742 "name": "BaseBdev2", 00:14:51.742 "aliases": [ 00:14:51.742 "fad31194-af83-458d-83fc-9eb2b5ead9f2" 00:14:51.742 ], 00:14:51.742 "product_name": "Malloc disk", 00:14:51.742 "block_size": 512, 00:14:51.742 "num_blocks": 65536, 00:14:51.742 "uuid": "fad31194-af83-458d-83fc-9eb2b5ead9f2", 00:14:51.742 "assigned_rate_limits": { 00:14:51.742 "rw_ios_per_sec": 0, 00:14:51.742 "rw_mbytes_per_sec": 0, 00:14:51.742 "r_mbytes_per_sec": 0, 00:14:51.742 "w_mbytes_per_sec": 0 00:14:51.742 }, 00:14:51.742 "claimed": false, 00:14:51.742 "zoned": false, 00:14:51.742 "supported_io_types": { 00:14:51.742 "read": true, 00:14:51.742 "write": true, 00:14:51.742 "unmap": true, 00:14:51.742 "flush": true, 00:14:51.742 "reset": true, 00:14:51.742 "nvme_admin": false, 00:14:51.742 "nvme_io": false, 00:14:51.742 "nvme_io_md": false, 00:14:51.742 "write_zeroes": true, 00:14:51.742 "zcopy": true, 00:14:51.742 "get_zone_info": false, 00:14:51.742 "zone_management": false, 00:14:51.742 "zone_append": false, 00:14:51.742 "compare": false, 00:14:51.742 "compare_and_write": false, 00:14:51.742 "abort": true, 00:14:51.742 "seek_hole": false, 00:14:51.742 "seek_data": false, 00:14:51.742 "copy": true, 00:14:51.742 "nvme_iov_md": false 00:14:51.742 }, 00:14:51.742 "memory_domains": [ 00:14:51.742 { 00:14:51.742 "dma_device_id": "system", 00:14:51.742 "dma_device_type": 1 00:14:51.742 }, 00:14:51.742 { 00:14:51.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.742 "dma_device_type": 2 00:14:51.742 } 00:14:51.742 ], 00:14:51.742 "driver_specific": {} 00:14:51.742 } 00:14:51.742 ] 00:14:51.742 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.742 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:51.742 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:51.742 08:51:28 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:51.742 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:51.742 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.742 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.742 BaseBdev3 00:14:51.742 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.742 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:51.742 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:51.742 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:51.742 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:51.742 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:51.742 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:51.742 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:51.742 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.742 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.742 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.742 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:51.742 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.743 08:51:28 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:51.743 [ 00:14:51.743 { 00:14:51.743 "name": "BaseBdev3", 00:14:51.743 "aliases": [ 00:14:51.743 "1a0cd151-e4a9-4514-84d1-b42c1aa340a7" 00:14:51.743 ], 00:14:51.743 "product_name": "Malloc disk", 00:14:51.743 "block_size": 512, 00:14:51.743 "num_blocks": 65536, 00:14:51.743 "uuid": "1a0cd151-e4a9-4514-84d1-b42c1aa340a7", 00:14:51.743 "assigned_rate_limits": { 00:14:51.743 "rw_ios_per_sec": 0, 00:14:51.743 "rw_mbytes_per_sec": 0, 00:14:51.743 "r_mbytes_per_sec": 0, 00:14:51.743 "w_mbytes_per_sec": 0 00:14:51.743 }, 00:14:51.743 "claimed": false, 00:14:51.743 "zoned": false, 00:14:51.743 "supported_io_types": { 00:14:51.743 "read": true, 00:14:51.743 "write": true, 00:14:51.743 "unmap": true, 00:14:51.743 "flush": true, 00:14:51.743 "reset": true, 00:14:51.743 "nvme_admin": false, 00:14:51.743 "nvme_io": false, 00:14:51.743 "nvme_io_md": false, 00:14:51.743 "write_zeroes": true, 00:14:51.743 "zcopy": true, 00:14:51.743 "get_zone_info": false, 00:14:51.743 "zone_management": false, 00:14:51.743 "zone_append": false, 00:14:51.743 "compare": false, 00:14:51.743 "compare_and_write": false, 00:14:51.743 "abort": true, 00:14:51.743 "seek_hole": false, 00:14:51.743 "seek_data": false, 00:14:51.743 "copy": true, 00:14:51.743 "nvme_iov_md": false 00:14:51.743 }, 00:14:51.743 "memory_domains": [ 00:14:51.743 { 00:14:51.743 "dma_device_id": "system", 00:14:51.743 "dma_device_type": 1 00:14:51.743 }, 00:14:51.743 { 00:14:51.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.743 "dma_device_type": 2 00:14:51.743 } 00:14:51.743 ], 00:14:51.743 "driver_specific": {} 00:14:51.743 } 00:14:51.743 ] 00:14:51.743 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.743 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:51.743 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:51.743 08:51:28 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:51.743 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:51.743 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.743 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.743 [2024-10-05 08:51:28.187795] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:51.743 [2024-10-05 08:51:28.187845] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:51.743 [2024-10-05 08:51:28.187865] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:51.743 [2024-10-05 08:51:28.189529] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:51.743 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.743 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:51.743 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.743 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:51.743 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.743 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.743 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:51.743 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.743 08:51:28 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.743 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.743 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.743 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.743 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.743 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.743 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.009 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.009 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.009 "name": "Existed_Raid", 00:14:52.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.009 "strip_size_kb": 64, 00:14:52.009 "state": "configuring", 00:14:52.009 "raid_level": "raid5f", 00:14:52.009 "superblock": false, 00:14:52.009 "num_base_bdevs": 3, 00:14:52.009 "num_base_bdevs_discovered": 2, 00:14:52.009 "num_base_bdevs_operational": 3, 00:14:52.009 "base_bdevs_list": [ 00:14:52.009 { 00:14:52.009 "name": "BaseBdev1", 00:14:52.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.009 "is_configured": false, 00:14:52.009 "data_offset": 0, 00:14:52.009 "data_size": 0 00:14:52.009 }, 00:14:52.009 { 00:14:52.009 "name": "BaseBdev2", 00:14:52.009 "uuid": "fad31194-af83-458d-83fc-9eb2b5ead9f2", 00:14:52.009 "is_configured": true, 00:14:52.009 "data_offset": 0, 00:14:52.009 "data_size": 65536 00:14:52.009 }, 00:14:52.009 { 00:14:52.009 "name": "BaseBdev3", 00:14:52.009 "uuid": "1a0cd151-e4a9-4514-84d1-b42c1aa340a7", 00:14:52.009 "is_configured": true, 
00:14:52.009 "data_offset": 0, 00:14:52.009 "data_size": 65536 00:14:52.009 } 00:14:52.009 ] 00:14:52.009 }' 00:14:52.009 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.009 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.269 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:52.269 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.269 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.269 [2024-10-05 08:51:28.662952] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:52.269 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.269 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:52.269 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.269 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.269 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:52.269 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.269 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:52.269 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.269 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.269 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.269 08:51:28 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.269 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.269 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.269 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.269 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.269 08:51:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.269 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.269 "name": "Existed_Raid", 00:14:52.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.269 "strip_size_kb": 64, 00:14:52.269 "state": "configuring", 00:14:52.269 "raid_level": "raid5f", 00:14:52.269 "superblock": false, 00:14:52.269 "num_base_bdevs": 3, 00:14:52.269 "num_base_bdevs_discovered": 1, 00:14:52.269 "num_base_bdevs_operational": 3, 00:14:52.269 "base_bdevs_list": [ 00:14:52.269 { 00:14:52.269 "name": "BaseBdev1", 00:14:52.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.269 "is_configured": false, 00:14:52.269 "data_offset": 0, 00:14:52.269 "data_size": 0 00:14:52.269 }, 00:14:52.269 { 00:14:52.269 "name": null, 00:14:52.269 "uuid": "fad31194-af83-458d-83fc-9eb2b5ead9f2", 00:14:52.269 "is_configured": false, 00:14:52.269 "data_offset": 0, 00:14:52.269 "data_size": 65536 00:14:52.269 }, 00:14:52.269 { 00:14:52.269 "name": "BaseBdev3", 00:14:52.269 "uuid": "1a0cd151-e4a9-4514-84d1-b42c1aa340a7", 00:14:52.269 "is_configured": true, 00:14:52.269 "data_offset": 0, 00:14:52.269 "data_size": 65536 00:14:52.269 } 00:14:52.269 ] 00:14:52.269 }' 00:14:52.269 08:51:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.269 08:51:28 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.838 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.838 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.838 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:52.838 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.838 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.838 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:52.839 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:52.839 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.839 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.839 [2024-10-05 08:51:29.257344] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:52.839 BaseBdev1 00:14:52.839 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.839 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:52.839 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:52.839 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:52.839 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:52.839 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:52.839 08:51:29 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:52.839 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:52.839 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.839 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.839 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.839 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:52.839 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.839 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.839 [ 00:14:52.839 { 00:14:52.839 "name": "BaseBdev1", 00:14:52.839 "aliases": [ 00:14:52.839 "286c3c5d-b2c6-4d28-86b2-2af0d5992719" 00:14:52.839 ], 00:14:52.839 "product_name": "Malloc disk", 00:14:52.839 "block_size": 512, 00:14:52.839 "num_blocks": 65536, 00:14:52.839 "uuid": "286c3c5d-b2c6-4d28-86b2-2af0d5992719", 00:14:52.839 "assigned_rate_limits": { 00:14:52.839 "rw_ios_per_sec": 0, 00:14:52.839 "rw_mbytes_per_sec": 0, 00:14:52.839 "r_mbytes_per_sec": 0, 00:14:52.839 "w_mbytes_per_sec": 0 00:14:52.839 }, 00:14:52.839 "claimed": true, 00:14:52.839 "claim_type": "exclusive_write", 00:14:52.839 "zoned": false, 00:14:52.839 "supported_io_types": { 00:14:52.839 "read": true, 00:14:52.839 "write": true, 00:14:52.839 "unmap": true, 00:14:52.839 "flush": true, 00:14:52.839 "reset": true, 00:14:52.839 "nvme_admin": false, 00:14:52.839 "nvme_io": false, 00:14:52.839 "nvme_io_md": false, 00:14:52.839 "write_zeroes": true, 00:14:52.839 "zcopy": true, 00:14:52.839 "get_zone_info": false, 00:14:52.839 "zone_management": false, 00:14:52.839 "zone_append": false, 00:14:52.839 
"compare": false, 00:14:52.839 "compare_and_write": false, 00:14:52.839 "abort": true, 00:14:52.839 "seek_hole": false, 00:14:52.839 "seek_data": false, 00:14:52.839 "copy": true, 00:14:52.839 "nvme_iov_md": false 00:14:52.839 }, 00:14:52.839 "memory_domains": [ 00:14:52.839 { 00:14:52.839 "dma_device_id": "system", 00:14:52.839 "dma_device_type": 1 00:14:52.839 }, 00:14:52.839 { 00:14:52.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.839 "dma_device_type": 2 00:14:52.839 } 00:14:52.839 ], 00:14:52.839 "driver_specific": {} 00:14:52.839 } 00:14:52.839 ] 00:14:52.839 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.839 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:52.839 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:52.839 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.839 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.839 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:52.839 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.839 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:52.839 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.839 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.839 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.839 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.839 08:51:29 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.839 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.839 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.839 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.099 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.099 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.099 "name": "Existed_Raid", 00:14:53.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.099 "strip_size_kb": 64, 00:14:53.099 "state": "configuring", 00:14:53.099 "raid_level": "raid5f", 00:14:53.099 "superblock": false, 00:14:53.099 "num_base_bdevs": 3, 00:14:53.099 "num_base_bdevs_discovered": 2, 00:14:53.099 "num_base_bdevs_operational": 3, 00:14:53.099 "base_bdevs_list": [ 00:14:53.099 { 00:14:53.099 "name": "BaseBdev1", 00:14:53.099 "uuid": "286c3c5d-b2c6-4d28-86b2-2af0d5992719", 00:14:53.099 "is_configured": true, 00:14:53.099 "data_offset": 0, 00:14:53.099 "data_size": 65536 00:14:53.099 }, 00:14:53.099 { 00:14:53.099 "name": null, 00:14:53.099 "uuid": "fad31194-af83-458d-83fc-9eb2b5ead9f2", 00:14:53.099 "is_configured": false, 00:14:53.099 "data_offset": 0, 00:14:53.099 "data_size": 65536 00:14:53.099 }, 00:14:53.099 { 00:14:53.099 "name": "BaseBdev3", 00:14:53.099 "uuid": "1a0cd151-e4a9-4514-84d1-b42c1aa340a7", 00:14:53.099 "is_configured": true, 00:14:53.099 "data_offset": 0, 00:14:53.099 "data_size": 65536 00:14:53.099 } 00:14:53.099 ] 00:14:53.099 }' 00:14:53.099 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.099 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.358 08:51:29 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.358 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.358 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:53.358 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.358 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.358 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:53.358 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:53.617 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.617 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.617 [2024-10-05 08:51:29.833060] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:53.617 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.617 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:53.617 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:53.617 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:53.617 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:53.617 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:53.617 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:53.617 08:51:29 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.617 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.617 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.617 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.617 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.617 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.617 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.617 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.617 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.617 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.617 "name": "Existed_Raid", 00:14:53.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.617 "strip_size_kb": 64, 00:14:53.617 "state": "configuring", 00:14:53.617 "raid_level": "raid5f", 00:14:53.617 "superblock": false, 00:14:53.617 "num_base_bdevs": 3, 00:14:53.617 "num_base_bdevs_discovered": 1, 00:14:53.617 "num_base_bdevs_operational": 3, 00:14:53.617 "base_bdevs_list": [ 00:14:53.617 { 00:14:53.617 "name": "BaseBdev1", 00:14:53.617 "uuid": "286c3c5d-b2c6-4d28-86b2-2af0d5992719", 00:14:53.617 "is_configured": true, 00:14:53.617 "data_offset": 0, 00:14:53.617 "data_size": 65536 00:14:53.617 }, 00:14:53.617 { 00:14:53.617 "name": null, 00:14:53.617 "uuid": "fad31194-af83-458d-83fc-9eb2b5ead9f2", 00:14:53.617 "is_configured": false, 00:14:53.617 "data_offset": 0, 00:14:53.617 "data_size": 65536 00:14:53.617 }, 00:14:53.617 { 00:14:53.617 "name": null, 
00:14:53.617 "uuid": "1a0cd151-e4a9-4514-84d1-b42c1aa340a7", 00:14:53.617 "is_configured": false, 00:14:53.617 "data_offset": 0, 00:14:53.617 "data_size": 65536 00:14:53.617 } 00:14:53.617 ] 00:14:53.617 }' 00:14:53.617 08:51:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.617 08:51:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.876 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.876 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.876 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.876 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:54.170 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.170 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:54.170 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:54.170 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.170 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.170 [2024-10-05 08:51:30.397098] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:54.170 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.170 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:54.170 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.170 08:51:30 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.170 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:54.170 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.170 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:54.170 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.170 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.170 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.170 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.170 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.170 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.170 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.170 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.170 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.170 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.170 "name": "Existed_Raid", 00:14:54.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.170 "strip_size_kb": 64, 00:14:54.170 "state": "configuring", 00:14:54.170 "raid_level": "raid5f", 00:14:54.170 "superblock": false, 00:14:54.170 "num_base_bdevs": 3, 00:14:54.170 "num_base_bdevs_discovered": 2, 00:14:54.170 "num_base_bdevs_operational": 3, 00:14:54.170 "base_bdevs_list": [ 00:14:54.170 { 
00:14:54.170 "name": "BaseBdev1", 00:14:54.170 "uuid": "286c3c5d-b2c6-4d28-86b2-2af0d5992719", 00:14:54.170 "is_configured": true, 00:14:54.170 "data_offset": 0, 00:14:54.170 "data_size": 65536 00:14:54.170 }, 00:14:54.170 { 00:14:54.170 "name": null, 00:14:54.170 "uuid": "fad31194-af83-458d-83fc-9eb2b5ead9f2", 00:14:54.170 "is_configured": false, 00:14:54.170 "data_offset": 0, 00:14:54.170 "data_size": 65536 00:14:54.170 }, 00:14:54.170 { 00:14:54.170 "name": "BaseBdev3", 00:14:54.170 "uuid": "1a0cd151-e4a9-4514-84d1-b42c1aa340a7", 00:14:54.170 "is_configured": true, 00:14:54.170 "data_offset": 0, 00:14:54.170 "data_size": 65536 00:14:54.170 } 00:14:54.170 ] 00:14:54.170 }' 00:14:54.170 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.170 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.429 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.429 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:54.429 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.429 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.688 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.688 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:54.688 08:51:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:54.688 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.688 08:51:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.688 [2024-10-05 08:51:30.945116] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:54.688 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.689 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:54.689 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.689 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.689 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:54.689 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.689 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:54.689 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.689 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.689 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.689 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.689 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.689 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.689 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.689 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.689 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.689 08:51:31 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.689 "name": "Existed_Raid", 00:14:54.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.689 "strip_size_kb": 64, 00:14:54.689 "state": "configuring", 00:14:54.689 "raid_level": "raid5f", 00:14:54.689 "superblock": false, 00:14:54.689 "num_base_bdevs": 3, 00:14:54.689 "num_base_bdevs_discovered": 1, 00:14:54.689 "num_base_bdevs_operational": 3, 00:14:54.689 "base_bdevs_list": [ 00:14:54.689 { 00:14:54.689 "name": null, 00:14:54.689 "uuid": "286c3c5d-b2c6-4d28-86b2-2af0d5992719", 00:14:54.689 "is_configured": false, 00:14:54.689 "data_offset": 0, 00:14:54.689 "data_size": 65536 00:14:54.689 }, 00:14:54.689 { 00:14:54.689 "name": null, 00:14:54.689 "uuid": "fad31194-af83-458d-83fc-9eb2b5ead9f2", 00:14:54.689 "is_configured": false, 00:14:54.689 "data_offset": 0, 00:14:54.689 "data_size": 65536 00:14:54.689 }, 00:14:54.689 { 00:14:54.689 "name": "BaseBdev3", 00:14:54.689 "uuid": "1a0cd151-e4a9-4514-84d1-b42c1aa340a7", 00:14:54.689 "is_configured": true, 00:14:54.689 "data_offset": 0, 00:14:54.689 "data_size": 65536 00:14:54.689 } 00:14:54.689 ] 00:14:54.689 }' 00:14:54.689 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.689 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.258 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.259 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.259 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:55.259 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.259 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.259 08:51:31 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:55.259 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:55.259 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.259 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.259 [2024-10-05 08:51:31.573102] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:55.259 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.259 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:55.259 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.259 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.259 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.259 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.259 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:55.259 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.259 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.259 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.259 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.259 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.259 08:51:31 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.259 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.259 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.259 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.259 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.259 "name": "Existed_Raid", 00:14:55.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.259 "strip_size_kb": 64, 00:14:55.259 "state": "configuring", 00:14:55.259 "raid_level": "raid5f", 00:14:55.259 "superblock": false, 00:14:55.259 "num_base_bdevs": 3, 00:14:55.259 "num_base_bdevs_discovered": 2, 00:14:55.259 "num_base_bdevs_operational": 3, 00:14:55.259 "base_bdevs_list": [ 00:14:55.259 { 00:14:55.259 "name": null, 00:14:55.259 "uuid": "286c3c5d-b2c6-4d28-86b2-2af0d5992719", 00:14:55.259 "is_configured": false, 00:14:55.259 "data_offset": 0, 00:14:55.259 "data_size": 65536 00:14:55.259 }, 00:14:55.259 { 00:14:55.259 "name": "BaseBdev2", 00:14:55.259 "uuid": "fad31194-af83-458d-83fc-9eb2b5ead9f2", 00:14:55.259 "is_configured": true, 00:14:55.259 "data_offset": 0, 00:14:55.259 "data_size": 65536 00:14:55.259 }, 00:14:55.259 { 00:14:55.259 "name": "BaseBdev3", 00:14:55.259 "uuid": "1a0cd151-e4a9-4514-84d1-b42c1aa340a7", 00:14:55.259 "is_configured": true, 00:14:55.259 "data_offset": 0, 00:14:55.259 "data_size": 65536 00:14:55.259 } 00:14:55.259 ] 00:14:55.259 }' 00:14:55.259 08:51:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.259 08:51:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.830 08:51:32 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 286c3c5d-b2c6-4d28-86b2-2af0d5992719 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.830 [2024-10-05 08:51:32.160345] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:55.830 [2024-10-05 08:51:32.160392] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:55.830 [2024-10-05 08:51:32.160404] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:55.830 [2024-10-05 08:51:32.160630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:14:55.830 [2024-10-05 08:51:32.165420] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:55.830 [2024-10-05 08:51:32.165443] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:55.830 [2024-10-05 08:51:32.165697] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:55.830 NewBaseBdev 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.830 08:51:32 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.830 [ 00:14:55.830 { 00:14:55.830 "name": "NewBaseBdev", 00:14:55.830 "aliases": [ 00:14:55.830 "286c3c5d-b2c6-4d28-86b2-2af0d5992719" 00:14:55.830 ], 00:14:55.830 "product_name": "Malloc disk", 00:14:55.830 "block_size": 512, 00:14:55.830 "num_blocks": 65536, 00:14:55.830 "uuid": "286c3c5d-b2c6-4d28-86b2-2af0d5992719", 00:14:55.830 "assigned_rate_limits": { 00:14:55.830 "rw_ios_per_sec": 0, 00:14:55.830 "rw_mbytes_per_sec": 0, 00:14:55.830 "r_mbytes_per_sec": 0, 00:14:55.830 "w_mbytes_per_sec": 0 00:14:55.830 }, 00:14:55.830 "claimed": true, 00:14:55.830 "claim_type": "exclusive_write", 00:14:55.830 "zoned": false, 00:14:55.830 "supported_io_types": { 00:14:55.830 "read": true, 00:14:55.830 "write": true, 00:14:55.830 "unmap": true, 00:14:55.830 "flush": true, 00:14:55.830 "reset": true, 00:14:55.830 "nvme_admin": false, 00:14:55.830 "nvme_io": false, 00:14:55.830 "nvme_io_md": false, 00:14:55.830 "write_zeroes": true, 00:14:55.830 "zcopy": true, 00:14:55.830 "get_zone_info": false, 00:14:55.830 "zone_management": false, 00:14:55.830 "zone_append": false, 00:14:55.830 "compare": false, 00:14:55.830 "compare_and_write": false, 00:14:55.830 "abort": true, 00:14:55.830 "seek_hole": false, 00:14:55.830 "seek_data": false, 00:14:55.830 "copy": true, 00:14:55.830 "nvme_iov_md": false 00:14:55.830 }, 00:14:55.830 "memory_domains": [ 00:14:55.830 { 00:14:55.830 "dma_device_id": "system", 00:14:55.830 "dma_device_type": 1 00:14:55.830 }, 00:14:55.830 { 00:14:55.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.830 "dma_device_type": 2 00:14:55.830 } 00:14:55.830 ], 00:14:55.830 "driver_specific": {} 00:14:55.830 } 00:14:55.830 ] 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:55.830 08:51:32 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.830 "name": "Existed_Raid", 00:14:55.830 "uuid": "0e98d877-cff3-477f-b492-58e254b45478", 00:14:55.830 "strip_size_kb": 64, 00:14:55.830 "state": "online", 
00:14:55.830 "raid_level": "raid5f", 00:14:55.830 "superblock": false, 00:14:55.830 "num_base_bdevs": 3, 00:14:55.830 "num_base_bdevs_discovered": 3, 00:14:55.830 "num_base_bdevs_operational": 3, 00:14:55.830 "base_bdevs_list": [ 00:14:55.830 { 00:14:55.830 "name": "NewBaseBdev", 00:14:55.830 "uuid": "286c3c5d-b2c6-4d28-86b2-2af0d5992719", 00:14:55.830 "is_configured": true, 00:14:55.830 "data_offset": 0, 00:14:55.830 "data_size": 65536 00:14:55.830 }, 00:14:55.830 { 00:14:55.830 "name": "BaseBdev2", 00:14:55.830 "uuid": "fad31194-af83-458d-83fc-9eb2b5ead9f2", 00:14:55.830 "is_configured": true, 00:14:55.830 "data_offset": 0, 00:14:55.830 "data_size": 65536 00:14:55.830 }, 00:14:55.830 { 00:14:55.830 "name": "BaseBdev3", 00:14:55.830 "uuid": "1a0cd151-e4a9-4514-84d1-b42c1aa340a7", 00:14:55.830 "is_configured": true, 00:14:55.830 "data_offset": 0, 00:14:55.830 "data_size": 65536 00:14:55.830 } 00:14:55.830 ] 00:14:55.830 }' 00:14:55.830 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.831 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.400 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:56.400 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:56.400 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:56.400 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:56.400 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:56.400 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:56.400 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:56.400 08:51:32 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:56.400 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.400 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.400 [2024-10-05 08:51:32.694946] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:56.400 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.400 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:56.400 "name": "Existed_Raid", 00:14:56.400 "aliases": [ 00:14:56.400 "0e98d877-cff3-477f-b492-58e254b45478" 00:14:56.400 ], 00:14:56.400 "product_name": "Raid Volume", 00:14:56.400 "block_size": 512, 00:14:56.400 "num_blocks": 131072, 00:14:56.400 "uuid": "0e98d877-cff3-477f-b492-58e254b45478", 00:14:56.400 "assigned_rate_limits": { 00:14:56.400 "rw_ios_per_sec": 0, 00:14:56.400 "rw_mbytes_per_sec": 0, 00:14:56.400 "r_mbytes_per_sec": 0, 00:14:56.400 "w_mbytes_per_sec": 0 00:14:56.400 }, 00:14:56.400 "claimed": false, 00:14:56.400 "zoned": false, 00:14:56.400 "supported_io_types": { 00:14:56.400 "read": true, 00:14:56.400 "write": true, 00:14:56.400 "unmap": false, 00:14:56.400 "flush": false, 00:14:56.400 "reset": true, 00:14:56.400 "nvme_admin": false, 00:14:56.400 "nvme_io": false, 00:14:56.400 "nvme_io_md": false, 00:14:56.400 "write_zeroes": true, 00:14:56.400 "zcopy": false, 00:14:56.400 "get_zone_info": false, 00:14:56.400 "zone_management": false, 00:14:56.400 "zone_append": false, 00:14:56.400 "compare": false, 00:14:56.400 "compare_and_write": false, 00:14:56.400 "abort": false, 00:14:56.400 "seek_hole": false, 00:14:56.400 "seek_data": false, 00:14:56.400 "copy": false, 00:14:56.400 "nvme_iov_md": false 00:14:56.400 }, 00:14:56.400 "driver_specific": { 00:14:56.400 "raid": { 00:14:56.400 "uuid": 
"0e98d877-cff3-477f-b492-58e254b45478", 00:14:56.400 "strip_size_kb": 64, 00:14:56.400 "state": "online", 00:14:56.400 "raid_level": "raid5f", 00:14:56.400 "superblock": false, 00:14:56.400 "num_base_bdevs": 3, 00:14:56.400 "num_base_bdevs_discovered": 3, 00:14:56.400 "num_base_bdevs_operational": 3, 00:14:56.401 "base_bdevs_list": [ 00:14:56.401 { 00:14:56.401 "name": "NewBaseBdev", 00:14:56.401 "uuid": "286c3c5d-b2c6-4d28-86b2-2af0d5992719", 00:14:56.401 "is_configured": true, 00:14:56.401 "data_offset": 0, 00:14:56.401 "data_size": 65536 00:14:56.401 }, 00:14:56.401 { 00:14:56.401 "name": "BaseBdev2", 00:14:56.401 "uuid": "fad31194-af83-458d-83fc-9eb2b5ead9f2", 00:14:56.401 "is_configured": true, 00:14:56.401 "data_offset": 0, 00:14:56.401 "data_size": 65536 00:14:56.401 }, 00:14:56.401 { 00:14:56.401 "name": "BaseBdev3", 00:14:56.401 "uuid": "1a0cd151-e4a9-4514-84d1-b42c1aa340a7", 00:14:56.401 "is_configured": true, 00:14:56.401 "data_offset": 0, 00:14:56.401 "data_size": 65536 00:14:56.401 } 00:14:56.401 ] 00:14:56.401 } 00:14:56.401 } 00:14:56.401 }' 00:14:56.401 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:56.401 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:56.401 BaseBdev2 00:14:56.401 BaseBdev3' 00:14:56.401 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:56.401 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:56.401 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:56.401 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:56.401 08:51:32 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:56.401 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.401 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.401 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.661 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:56.661 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:56.661 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:56.661 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:56.661 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:56.661 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.661 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.661 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.661 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:56.661 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:56.661 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:56.661 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:56.661 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:14:56.661 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.661 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.661 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.661 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:56.661 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:56.661 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:56.661 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.661 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.661 [2024-10-05 08:51:32.994248] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:56.661 [2024-10-05 08:51:32.994273] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:56.661 [2024-10-05 08:51:32.994339] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:56.661 [2024-10-05 08:51:32.994604] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:56.661 [2024-10-05 08:51:32.994624] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:56.661 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.661 08:51:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 77139 00:14:56.661 08:51:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 77139 ']' 00:14:56.661 08:51:32 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 77139 00:14:56.661 08:51:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:14:56.661 08:51:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:56.661 08:51:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77139 00:14:56.661 08:51:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:56.661 08:51:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:56.661 killing process with pid 77139 00:14:56.661 08:51:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77139' 00:14:56.661 08:51:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 77139 00:14:56.661 [2024-10-05 08:51:33.041782] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:56.661 08:51:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 77139 00:14:56.921 [2024-10-05 08:51:33.323116] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:58.303 08:51:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:58.303 00:14:58.303 real 0m11.312s 00:14:58.303 user 0m18.090s 00:14:58.303 sys 0m2.142s 00:14:58.303 08:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:58.304 08:51:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.304 ************************************ 00:14:58.304 END TEST raid5f_state_function_test 00:14:58.304 ************************************ 00:14:58.304 08:51:34 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:14:58.304 08:51:34 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:58.304 08:51:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:58.304 08:51:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:58.304 ************************************ 00:14:58.304 START TEST raid5f_state_function_test_sb 00:14:58.304 ************************************ 00:14:58.304 08:51:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 true 00:14:58.304 08:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:58.304 08:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:58.304 08:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:58.304 08:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:58.304 08:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:58.304 08:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:58.304 08:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:58.304 08:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:58.304 08:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:58.304 08:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:58.304 08:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:58.304 08:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:58.304 08:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:58.304 08:51:34 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:58.304 08:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:58.304 08:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:58.304 08:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:58.304 08:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:58.304 08:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:58.304 08:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:58.304 08:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:58.304 08:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:58.304 08:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:58.304 08:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:58.304 08:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:58.304 08:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:58.304 08:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=77700 00:14:58.304 08:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:58.304 Process raid pid: 77700 00:14:58.304 08:51:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77700' 00:14:58.304 08:51:34 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 77700 00:14:58.304 08:51:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 77700 ']' 00:14:58.304 08:51:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.304 08:51:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:58.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.304 08:51:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.304 08:51:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:58.304 08:51:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.304 [2024-10-05 08:51:34.697205] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:14:58.304 [2024-10-05 08:51:34.697337] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:58.564 [2024-10-05 08:51:34.855972] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.824 [2024-10-05 08:51:35.045297] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.824 [2024-10-05 08:51:35.220739] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:58.824 [2024-10-05 08:51:35.220777] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:59.083 08:51:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:59.083 08:51:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:59.083 08:51:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:59.083 08:51:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.083 08:51:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.083 [2024-10-05 08:51:35.510275] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:59.083 [2024-10-05 08:51:35.510343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:59.083 [2024-10-05 08:51:35.510353] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:59.084 [2024-10-05 08:51:35.510363] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:59.084 [2024-10-05 08:51:35.510369] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:14:59.084 [2024-10-05 08:51:35.510377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:59.084 08:51:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.084 08:51:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:59.084 08:51:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:59.084 08:51:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:59.084 08:51:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:59.084 08:51:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:59.084 08:51:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:59.084 08:51:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.084 08:51:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.084 08:51:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.084 08:51:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.084 08:51:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.084 08:51:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.084 08:51:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.084 08:51:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.084 08:51:35 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.343 08:51:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.343 "name": "Existed_Raid", 00:14:59.343 "uuid": "87079b22-9b7a-4d9c-ab28-b1cd5516746d", 00:14:59.343 "strip_size_kb": 64, 00:14:59.343 "state": "configuring", 00:14:59.343 "raid_level": "raid5f", 00:14:59.343 "superblock": true, 00:14:59.343 "num_base_bdevs": 3, 00:14:59.343 "num_base_bdevs_discovered": 0, 00:14:59.343 "num_base_bdevs_operational": 3, 00:14:59.343 "base_bdevs_list": [ 00:14:59.343 { 00:14:59.343 "name": "BaseBdev1", 00:14:59.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.343 "is_configured": false, 00:14:59.343 "data_offset": 0, 00:14:59.343 "data_size": 0 00:14:59.343 }, 00:14:59.343 { 00:14:59.343 "name": "BaseBdev2", 00:14:59.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.343 "is_configured": false, 00:14:59.343 "data_offset": 0, 00:14:59.343 "data_size": 0 00:14:59.343 }, 00:14:59.343 { 00:14:59.343 "name": "BaseBdev3", 00:14:59.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.343 "is_configured": false, 00:14:59.343 "data_offset": 0, 00:14:59.343 "data_size": 0 00:14:59.343 } 00:14:59.343 ] 00:14:59.343 }' 00:14:59.343 08:51:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.343 08:51:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.602 08:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:59.602 08:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.602 08:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.602 [2024-10-05 08:51:36.017325] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:59.602 
[2024-10-05 08:51:36.017366] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:59.602 08:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.602 08:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:59.602 08:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.602 08:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.602 [2024-10-05 08:51:36.029341] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:59.602 [2024-10-05 08:51:36.029385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:59.602 [2024-10-05 08:51:36.029394] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:59.602 [2024-10-05 08:51:36.029403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:59.603 [2024-10-05 08:51:36.029408] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:59.603 [2024-10-05 08:51:36.029417] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:59.603 08:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.603 08:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:59.603 08:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.603 08:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.862 [2024-10-05 08:51:36.092523] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:59.862 BaseBdev1 00:14:59.862 08:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.862 08:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:59.862 08:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:59.862 08:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:59.862 08:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:59.862 08:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:59.862 08:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:59.862 08:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:59.862 08:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.862 08:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.862 08:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.862 08:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:59.862 08:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.862 08:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.862 [ 00:14:59.862 { 00:14:59.862 "name": "BaseBdev1", 00:14:59.862 "aliases": [ 00:14:59.862 "da4fff3d-63ea-4dc3-a353-6477b4426e6e" 00:14:59.862 ], 00:14:59.862 "product_name": "Malloc disk", 00:14:59.862 "block_size": 512, 00:14:59.862 
"num_blocks": 65536, 00:14:59.862 "uuid": "da4fff3d-63ea-4dc3-a353-6477b4426e6e", 00:14:59.862 "assigned_rate_limits": { 00:14:59.862 "rw_ios_per_sec": 0, 00:14:59.862 "rw_mbytes_per_sec": 0, 00:14:59.862 "r_mbytes_per_sec": 0, 00:14:59.862 "w_mbytes_per_sec": 0 00:14:59.862 }, 00:14:59.862 "claimed": true, 00:14:59.862 "claim_type": "exclusive_write", 00:14:59.862 "zoned": false, 00:14:59.862 "supported_io_types": { 00:14:59.862 "read": true, 00:14:59.862 "write": true, 00:14:59.862 "unmap": true, 00:14:59.862 "flush": true, 00:14:59.862 "reset": true, 00:14:59.862 "nvme_admin": false, 00:14:59.862 "nvme_io": false, 00:14:59.862 "nvme_io_md": false, 00:14:59.862 "write_zeroes": true, 00:14:59.862 "zcopy": true, 00:14:59.862 "get_zone_info": false, 00:14:59.862 "zone_management": false, 00:14:59.862 "zone_append": false, 00:14:59.862 "compare": false, 00:14:59.862 "compare_and_write": false, 00:14:59.862 "abort": true, 00:14:59.862 "seek_hole": false, 00:14:59.862 "seek_data": false, 00:14:59.862 "copy": true, 00:14:59.862 "nvme_iov_md": false 00:14:59.862 }, 00:14:59.862 "memory_domains": [ 00:14:59.862 { 00:14:59.862 "dma_device_id": "system", 00:14:59.862 "dma_device_type": 1 00:14:59.862 }, 00:14:59.862 { 00:14:59.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.862 "dma_device_type": 2 00:14:59.862 } 00:14:59.862 ], 00:14:59.862 "driver_specific": {} 00:14:59.862 } 00:14:59.862 ] 00:14:59.862 08:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.862 08:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:59.862 08:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:59.862 08:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:59.862 08:51:36 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:59.862 08:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:59.862 08:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:59.862 08:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:59.863 08:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.863 08:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.863 08:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.863 08:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.863 08:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.863 08:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.863 08:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.863 08:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.863 08:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.863 08:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.863 "name": "Existed_Raid", 00:14:59.863 "uuid": "4753f25b-94b7-4295-b69e-eb2306d02653", 00:14:59.863 "strip_size_kb": 64, 00:14:59.863 "state": "configuring", 00:14:59.863 "raid_level": "raid5f", 00:14:59.863 "superblock": true, 00:14:59.863 "num_base_bdevs": 3, 00:14:59.863 "num_base_bdevs_discovered": 1, 00:14:59.863 "num_base_bdevs_operational": 3, 00:14:59.863 "base_bdevs_list": [ 00:14:59.863 { 00:14:59.863 
"name": "BaseBdev1", 00:14:59.863 "uuid": "da4fff3d-63ea-4dc3-a353-6477b4426e6e", 00:14:59.863 "is_configured": true, 00:14:59.863 "data_offset": 2048, 00:14:59.863 "data_size": 63488 00:14:59.863 }, 00:14:59.863 { 00:14:59.863 "name": "BaseBdev2", 00:14:59.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.863 "is_configured": false, 00:14:59.863 "data_offset": 0, 00:14:59.863 "data_size": 0 00:14:59.863 }, 00:14:59.863 { 00:14:59.863 "name": "BaseBdev3", 00:14:59.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.863 "is_configured": false, 00:14:59.863 "data_offset": 0, 00:14:59.863 "data_size": 0 00:14:59.863 } 00:14:59.863 ] 00:14:59.863 }' 00:14:59.863 08:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.863 08:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.122 08:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:00.123 08:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.123 08:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.123 [2024-10-05 08:51:36.563731] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:00.123 [2024-10-05 08:51:36.563773] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:00.123 08:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.123 08:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:00.123 08:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.123 08:51:36 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:15:00.123 [2024-10-05 08:51:36.575762] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:00.123 [2024-10-05 08:51:36.577453] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:00.123 [2024-10-05 08:51:36.577495] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:00.123 [2024-10-05 08:51:36.577504] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:00.123 [2024-10-05 08:51:36.577513] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:00.123 08:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.123 08:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:00.123 08:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:00.123 08:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:00.123 08:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.123 08:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.123 08:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:00.123 08:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.123 08:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.123 08:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.123 08:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:00.123 08:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.123 08:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.123 08:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.123 08:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.123 08:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.123 08:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.382 08:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.382 08:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.382 "name": "Existed_Raid", 00:15:00.382 "uuid": "75910d84-e257-459b-a975-9148a6293e98", 00:15:00.382 "strip_size_kb": 64, 00:15:00.382 "state": "configuring", 00:15:00.382 "raid_level": "raid5f", 00:15:00.382 "superblock": true, 00:15:00.382 "num_base_bdevs": 3, 00:15:00.382 "num_base_bdevs_discovered": 1, 00:15:00.382 "num_base_bdevs_operational": 3, 00:15:00.382 "base_bdevs_list": [ 00:15:00.382 { 00:15:00.382 "name": "BaseBdev1", 00:15:00.382 "uuid": "da4fff3d-63ea-4dc3-a353-6477b4426e6e", 00:15:00.382 "is_configured": true, 00:15:00.382 "data_offset": 2048, 00:15:00.382 "data_size": 63488 00:15:00.382 }, 00:15:00.382 { 00:15:00.382 "name": "BaseBdev2", 00:15:00.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.382 "is_configured": false, 00:15:00.382 "data_offset": 0, 00:15:00.382 "data_size": 0 00:15:00.382 }, 00:15:00.382 { 00:15:00.382 "name": "BaseBdev3", 00:15:00.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.382 "is_configured": false, 00:15:00.382 "data_offset": 0, 00:15:00.382 "data_size": 
0 00:15:00.382 } 00:15:00.382 ] 00:15:00.382 }' 00:15:00.382 08:51:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.382 08:51:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.642 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:00.642 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.642 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.642 [2024-10-05 08:51:37.105135] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:00.642 BaseBdev2 00:15:00.642 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.642 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:00.642 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:00.642 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:00.642 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:00.642 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:00.642 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:00.642 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:00.642 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.642 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.901 08:51:37 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.902 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:00.902 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.902 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.902 [ 00:15:00.902 { 00:15:00.902 "name": "BaseBdev2", 00:15:00.902 "aliases": [ 00:15:00.902 "958cb98e-e540-4754-962c-a617c87f99c6" 00:15:00.902 ], 00:15:00.902 "product_name": "Malloc disk", 00:15:00.902 "block_size": 512, 00:15:00.902 "num_blocks": 65536, 00:15:00.902 "uuid": "958cb98e-e540-4754-962c-a617c87f99c6", 00:15:00.902 "assigned_rate_limits": { 00:15:00.902 "rw_ios_per_sec": 0, 00:15:00.902 "rw_mbytes_per_sec": 0, 00:15:00.902 "r_mbytes_per_sec": 0, 00:15:00.902 "w_mbytes_per_sec": 0 00:15:00.902 }, 00:15:00.902 "claimed": true, 00:15:00.902 "claim_type": "exclusive_write", 00:15:00.902 "zoned": false, 00:15:00.902 "supported_io_types": { 00:15:00.902 "read": true, 00:15:00.902 "write": true, 00:15:00.902 "unmap": true, 00:15:00.902 "flush": true, 00:15:00.902 "reset": true, 00:15:00.902 "nvme_admin": false, 00:15:00.902 "nvme_io": false, 00:15:00.902 "nvme_io_md": false, 00:15:00.902 "write_zeroes": true, 00:15:00.902 "zcopy": true, 00:15:00.902 "get_zone_info": false, 00:15:00.902 "zone_management": false, 00:15:00.902 "zone_append": false, 00:15:00.902 "compare": false, 00:15:00.902 "compare_and_write": false, 00:15:00.902 "abort": true, 00:15:00.902 "seek_hole": false, 00:15:00.902 "seek_data": false, 00:15:00.902 "copy": true, 00:15:00.902 "nvme_iov_md": false 00:15:00.902 }, 00:15:00.902 "memory_domains": [ 00:15:00.902 { 00:15:00.902 "dma_device_id": "system", 00:15:00.902 "dma_device_type": 1 00:15:00.902 }, 00:15:00.902 { 00:15:00.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.902 "dma_device_type": 2 00:15:00.902 } 
00:15:00.902 ], 00:15:00.902 "driver_specific": {} 00:15:00.902 } 00:15:00.902 ] 00:15:00.902 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.902 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:00.902 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:00.902 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:00.902 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:00.902 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.902 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.902 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:00.902 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.902 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.902 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.902 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.902 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.902 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.902 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.902 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:15:00.902 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.902 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.902 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.902 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.902 "name": "Existed_Raid", 00:15:00.902 "uuid": "75910d84-e257-459b-a975-9148a6293e98", 00:15:00.902 "strip_size_kb": 64, 00:15:00.902 "state": "configuring", 00:15:00.902 "raid_level": "raid5f", 00:15:00.902 "superblock": true, 00:15:00.902 "num_base_bdevs": 3, 00:15:00.902 "num_base_bdevs_discovered": 2, 00:15:00.902 "num_base_bdevs_operational": 3, 00:15:00.902 "base_bdevs_list": [ 00:15:00.902 { 00:15:00.902 "name": "BaseBdev1", 00:15:00.902 "uuid": "da4fff3d-63ea-4dc3-a353-6477b4426e6e", 00:15:00.902 "is_configured": true, 00:15:00.902 "data_offset": 2048, 00:15:00.902 "data_size": 63488 00:15:00.902 }, 00:15:00.902 { 00:15:00.902 "name": "BaseBdev2", 00:15:00.902 "uuid": "958cb98e-e540-4754-962c-a617c87f99c6", 00:15:00.902 "is_configured": true, 00:15:00.902 "data_offset": 2048, 00:15:00.902 "data_size": 63488 00:15:00.902 }, 00:15:00.902 { 00:15:00.902 "name": "BaseBdev3", 00:15:00.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.902 "is_configured": false, 00:15:00.902 "data_offset": 0, 00:15:00.902 "data_size": 0 00:15:00.902 } 00:15:00.902 ] 00:15:00.902 }' 00:15:00.902 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.902 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.162 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:01.162 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:15:01.162 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.162 [2024-10-05 08:51:37.630863] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:01.162 [2024-10-05 08:51:37.631153] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:01.162 [2024-10-05 08:51:37.631182] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:01.162 [2024-10-05 08:51:37.631430] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:01.162 BaseBdev3 00:15:01.422 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.422 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:01.422 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:01.422 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:01.422 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:01.422 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:01.422 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:01.422 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:01.422 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.422 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.422 [2024-10-05 08:51:37.637243] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:01.422 [2024-10-05 08:51:37.637267] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:01.422 [2024-10-05 08:51:37.637413] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.422 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.422 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:01.422 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.422 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.422 [ 00:15:01.422 { 00:15:01.422 "name": "BaseBdev3", 00:15:01.422 "aliases": [ 00:15:01.422 "28ddc47a-2d35-43d9-84f9-da6d2e8758d2" 00:15:01.422 ], 00:15:01.422 "product_name": "Malloc disk", 00:15:01.422 "block_size": 512, 00:15:01.422 "num_blocks": 65536, 00:15:01.422 "uuid": "28ddc47a-2d35-43d9-84f9-da6d2e8758d2", 00:15:01.422 "assigned_rate_limits": { 00:15:01.422 "rw_ios_per_sec": 0, 00:15:01.422 "rw_mbytes_per_sec": 0, 00:15:01.422 "r_mbytes_per_sec": 0, 00:15:01.422 "w_mbytes_per_sec": 0 00:15:01.422 }, 00:15:01.422 "claimed": true, 00:15:01.422 "claim_type": "exclusive_write", 00:15:01.422 "zoned": false, 00:15:01.422 "supported_io_types": { 00:15:01.422 "read": true, 00:15:01.422 "write": true, 00:15:01.422 "unmap": true, 00:15:01.422 "flush": true, 00:15:01.422 "reset": true, 00:15:01.422 "nvme_admin": false, 00:15:01.422 "nvme_io": false, 00:15:01.422 "nvme_io_md": false, 00:15:01.422 "write_zeroes": true, 00:15:01.422 "zcopy": true, 00:15:01.422 "get_zone_info": false, 00:15:01.422 "zone_management": false, 00:15:01.422 "zone_append": false, 00:15:01.422 "compare": false, 00:15:01.422 "compare_and_write": false, 00:15:01.422 "abort": true, 00:15:01.422 "seek_hole": false, 00:15:01.422 "seek_data": false, 00:15:01.422 "copy": true, 00:15:01.422 
"nvme_iov_md": false 00:15:01.422 }, 00:15:01.422 "memory_domains": [ 00:15:01.422 { 00:15:01.422 "dma_device_id": "system", 00:15:01.422 "dma_device_type": 1 00:15:01.422 }, 00:15:01.422 { 00:15:01.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.422 "dma_device_type": 2 00:15:01.422 } 00:15:01.422 ], 00:15:01.422 "driver_specific": {} 00:15:01.422 } 00:15:01.422 ] 00:15:01.422 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.422 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:01.422 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:01.422 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:01.422 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:01.422 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.422 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.422 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:01.422 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.422 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:01.422 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.422 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.422 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.422 08:51:37 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.422 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.422 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.422 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.422 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.422 08:51:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.422 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.422 "name": "Existed_Raid", 00:15:01.422 "uuid": "75910d84-e257-459b-a975-9148a6293e98", 00:15:01.422 "strip_size_kb": 64, 00:15:01.422 "state": "online", 00:15:01.422 "raid_level": "raid5f", 00:15:01.422 "superblock": true, 00:15:01.422 "num_base_bdevs": 3, 00:15:01.422 "num_base_bdevs_discovered": 3, 00:15:01.422 "num_base_bdevs_operational": 3, 00:15:01.422 "base_bdevs_list": [ 00:15:01.422 { 00:15:01.422 "name": "BaseBdev1", 00:15:01.422 "uuid": "da4fff3d-63ea-4dc3-a353-6477b4426e6e", 00:15:01.422 "is_configured": true, 00:15:01.422 "data_offset": 2048, 00:15:01.422 "data_size": 63488 00:15:01.422 }, 00:15:01.422 { 00:15:01.422 "name": "BaseBdev2", 00:15:01.422 "uuid": "958cb98e-e540-4754-962c-a617c87f99c6", 00:15:01.422 "is_configured": true, 00:15:01.422 "data_offset": 2048, 00:15:01.422 "data_size": 63488 00:15:01.422 }, 00:15:01.422 { 00:15:01.422 "name": "BaseBdev3", 00:15:01.422 "uuid": "28ddc47a-2d35-43d9-84f9-da6d2e8758d2", 00:15:01.422 "is_configured": true, 00:15:01.422 "data_offset": 2048, 00:15:01.422 "data_size": 63488 00:15:01.422 } 00:15:01.422 ] 00:15:01.422 }' 00:15:01.422 08:51:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.422 08:51:37 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.992 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:01.992 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:01.992 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:01.992 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:01.992 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:01.992 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:01.992 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:01.992 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.992 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.992 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:01.992 [2024-10-05 08:51:38.178674] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:01.992 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.992 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:01.992 "name": "Existed_Raid", 00:15:01.992 "aliases": [ 00:15:01.992 "75910d84-e257-459b-a975-9148a6293e98" 00:15:01.992 ], 00:15:01.993 "product_name": "Raid Volume", 00:15:01.993 "block_size": 512, 00:15:01.993 "num_blocks": 126976, 00:15:01.993 "uuid": "75910d84-e257-459b-a975-9148a6293e98", 00:15:01.993 "assigned_rate_limits": { 00:15:01.993 "rw_ios_per_sec": 0, 00:15:01.993 
"rw_mbytes_per_sec": 0, 00:15:01.993 "r_mbytes_per_sec": 0, 00:15:01.993 "w_mbytes_per_sec": 0 00:15:01.993 }, 00:15:01.993 "claimed": false, 00:15:01.993 "zoned": false, 00:15:01.993 "supported_io_types": { 00:15:01.993 "read": true, 00:15:01.993 "write": true, 00:15:01.993 "unmap": false, 00:15:01.993 "flush": false, 00:15:01.993 "reset": true, 00:15:01.993 "nvme_admin": false, 00:15:01.993 "nvme_io": false, 00:15:01.993 "nvme_io_md": false, 00:15:01.993 "write_zeroes": true, 00:15:01.993 "zcopy": false, 00:15:01.993 "get_zone_info": false, 00:15:01.993 "zone_management": false, 00:15:01.993 "zone_append": false, 00:15:01.993 "compare": false, 00:15:01.993 "compare_and_write": false, 00:15:01.993 "abort": false, 00:15:01.993 "seek_hole": false, 00:15:01.993 "seek_data": false, 00:15:01.993 "copy": false, 00:15:01.993 "nvme_iov_md": false 00:15:01.993 }, 00:15:01.993 "driver_specific": { 00:15:01.993 "raid": { 00:15:01.993 "uuid": "75910d84-e257-459b-a975-9148a6293e98", 00:15:01.993 "strip_size_kb": 64, 00:15:01.993 "state": "online", 00:15:01.993 "raid_level": "raid5f", 00:15:01.993 "superblock": true, 00:15:01.993 "num_base_bdevs": 3, 00:15:01.993 "num_base_bdevs_discovered": 3, 00:15:01.993 "num_base_bdevs_operational": 3, 00:15:01.993 "base_bdevs_list": [ 00:15:01.993 { 00:15:01.993 "name": "BaseBdev1", 00:15:01.993 "uuid": "da4fff3d-63ea-4dc3-a353-6477b4426e6e", 00:15:01.993 "is_configured": true, 00:15:01.993 "data_offset": 2048, 00:15:01.993 "data_size": 63488 00:15:01.993 }, 00:15:01.993 { 00:15:01.993 "name": "BaseBdev2", 00:15:01.993 "uuid": "958cb98e-e540-4754-962c-a617c87f99c6", 00:15:01.993 "is_configured": true, 00:15:01.993 "data_offset": 2048, 00:15:01.993 "data_size": 63488 00:15:01.993 }, 00:15:01.993 { 00:15:01.993 "name": "BaseBdev3", 00:15:01.993 "uuid": "28ddc47a-2d35-43d9-84f9-da6d2e8758d2", 00:15:01.993 "is_configured": true, 00:15:01.993 "data_offset": 2048, 00:15:01.993 "data_size": 63488 00:15:01.993 } 00:15:01.993 ] 00:15:01.993 } 
00:15:01.993 } 00:15:01.993 }' 00:15:01.993 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:01.993 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:01.993 BaseBdev2 00:15:01.993 BaseBdev3' 00:15:01.993 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.993 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:01.993 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:01.993 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:01.993 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.993 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.993 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.993 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.993 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:01.993 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:01.993 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:01.993 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:01.993 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.993 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.993 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.993 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.993 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:01.993 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:01.993 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:01.993 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:01.993 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.993 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.993 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.993 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.993 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:01.993 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:01.993 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:01.993 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.993 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.993 [2024-10-05 
08:51:38.458068] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:02.254 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.254 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:02.254 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:02.254 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:02.254 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:02.254 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:02.254 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:02.254 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:02.254 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.254 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:02.254 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.254 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:02.254 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.254 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.254 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.254 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.254 08:51:38 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.254 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.254 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.254 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.254 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.254 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.254 "name": "Existed_Raid", 00:15:02.254 "uuid": "75910d84-e257-459b-a975-9148a6293e98", 00:15:02.254 "strip_size_kb": 64, 00:15:02.254 "state": "online", 00:15:02.254 "raid_level": "raid5f", 00:15:02.254 "superblock": true, 00:15:02.254 "num_base_bdevs": 3, 00:15:02.254 "num_base_bdevs_discovered": 2, 00:15:02.254 "num_base_bdevs_operational": 2, 00:15:02.254 "base_bdevs_list": [ 00:15:02.254 { 00:15:02.254 "name": null, 00:15:02.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.254 "is_configured": false, 00:15:02.254 "data_offset": 0, 00:15:02.254 "data_size": 63488 00:15:02.254 }, 00:15:02.254 { 00:15:02.254 "name": "BaseBdev2", 00:15:02.254 "uuid": "958cb98e-e540-4754-962c-a617c87f99c6", 00:15:02.254 "is_configured": true, 00:15:02.254 "data_offset": 2048, 00:15:02.254 "data_size": 63488 00:15:02.254 }, 00:15:02.254 { 00:15:02.254 "name": "BaseBdev3", 00:15:02.254 "uuid": "28ddc47a-2d35-43d9-84f9-da6d2e8758d2", 00:15:02.254 "is_configured": true, 00:15:02.254 "data_offset": 2048, 00:15:02.254 "data_size": 63488 00:15:02.254 } 00:15:02.254 ] 00:15:02.254 }' 00:15:02.254 08:51:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.254 08:51:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:02.823 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:02.823 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:02.823 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.823 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.823 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:02.823 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.823 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.823 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:02.823 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:02.823 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:02.823 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.823 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.823 [2024-10-05 08:51:39.063842] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:02.823 [2024-10-05 08:51:39.063998] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:02.823 [2024-10-05 08:51:39.153671] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:02.823 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.823 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:02.823 08:51:39 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:02.823 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.823 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.823 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:02.823 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.823 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.823 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:02.823 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:02.823 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:02.823 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.823 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.823 [2024-10-05 08:51:39.213594] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:02.823 [2024-10-05 08:51:39.213644] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:03.083 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.083 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:03.083 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:03.083 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.083 
08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:03.083 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.083 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.083 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.083 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:03.083 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:03.083 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:03.083 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:03.083 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:03.083 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:03.083 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.083 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.083 BaseBdev2 00:15:03.083 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.083 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:03.083 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:03.083 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:03.083 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:03.083 08:51:39 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:03.083 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:03.083 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:03.083 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.083 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.083 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.083 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:03.083 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.083 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.083 [ 00:15:03.083 { 00:15:03.083 "name": "BaseBdev2", 00:15:03.083 "aliases": [ 00:15:03.083 "40274f55-329c-494f-b541-5e4d0e93e736" 00:15:03.083 ], 00:15:03.083 "product_name": "Malloc disk", 00:15:03.083 "block_size": 512, 00:15:03.083 "num_blocks": 65536, 00:15:03.083 "uuid": "40274f55-329c-494f-b541-5e4d0e93e736", 00:15:03.083 "assigned_rate_limits": { 00:15:03.083 "rw_ios_per_sec": 0, 00:15:03.083 "rw_mbytes_per_sec": 0, 00:15:03.083 "r_mbytes_per_sec": 0, 00:15:03.083 "w_mbytes_per_sec": 0 00:15:03.083 }, 00:15:03.084 "claimed": false, 00:15:03.084 "zoned": false, 00:15:03.084 "supported_io_types": { 00:15:03.084 "read": true, 00:15:03.084 "write": true, 00:15:03.084 "unmap": true, 00:15:03.084 "flush": true, 00:15:03.084 "reset": true, 00:15:03.084 "nvme_admin": false, 00:15:03.084 "nvme_io": false, 00:15:03.084 "nvme_io_md": false, 00:15:03.084 "write_zeroes": true, 00:15:03.084 "zcopy": true, 00:15:03.084 "get_zone_info": false, 
00:15:03.084 "zone_management": false, 00:15:03.084 "zone_append": false, 00:15:03.084 "compare": false, 00:15:03.084 "compare_and_write": false, 00:15:03.084 "abort": true, 00:15:03.084 "seek_hole": false, 00:15:03.084 "seek_data": false, 00:15:03.084 "copy": true, 00:15:03.084 "nvme_iov_md": false 00:15:03.084 }, 00:15:03.084 "memory_domains": [ 00:15:03.084 { 00:15:03.084 "dma_device_id": "system", 00:15:03.084 "dma_device_type": 1 00:15:03.084 }, 00:15:03.084 { 00:15:03.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.084 "dma_device_type": 2 00:15:03.084 } 00:15:03.084 ], 00:15:03.084 "driver_specific": {} 00:15:03.084 } 00:15:03.084 ] 00:15:03.084 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.084 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:03.084 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:03.084 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:03.084 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:03.084 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.084 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.084 BaseBdev3 00:15:03.084 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.084 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:03.084 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:03.084 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:03.084 08:51:39 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:03.084 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:03.084 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:03.084 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:03.084 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.084 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.084 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.084 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:03.084 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.084 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.084 [ 00:15:03.084 { 00:15:03.084 "name": "BaseBdev3", 00:15:03.084 "aliases": [ 00:15:03.084 "1971141c-2afb-4e9e-a3ad-62637155b812" 00:15:03.084 ], 00:15:03.084 "product_name": "Malloc disk", 00:15:03.084 "block_size": 512, 00:15:03.084 "num_blocks": 65536, 00:15:03.084 "uuid": "1971141c-2afb-4e9e-a3ad-62637155b812", 00:15:03.084 "assigned_rate_limits": { 00:15:03.084 "rw_ios_per_sec": 0, 00:15:03.084 "rw_mbytes_per_sec": 0, 00:15:03.084 "r_mbytes_per_sec": 0, 00:15:03.084 "w_mbytes_per_sec": 0 00:15:03.084 }, 00:15:03.084 "claimed": false, 00:15:03.084 "zoned": false, 00:15:03.084 "supported_io_types": { 00:15:03.084 "read": true, 00:15:03.084 "write": true, 00:15:03.084 "unmap": true, 00:15:03.084 "flush": true, 00:15:03.084 "reset": true, 00:15:03.084 "nvme_admin": false, 00:15:03.084 "nvme_io": false, 00:15:03.084 "nvme_io_md": 
false, 00:15:03.084 "write_zeroes": true, 00:15:03.084 "zcopy": true, 00:15:03.084 "get_zone_info": false, 00:15:03.084 "zone_management": false, 00:15:03.084 "zone_append": false, 00:15:03.084 "compare": false, 00:15:03.084 "compare_and_write": false, 00:15:03.084 "abort": true, 00:15:03.084 "seek_hole": false, 00:15:03.084 "seek_data": false, 00:15:03.084 "copy": true, 00:15:03.084 "nvme_iov_md": false 00:15:03.084 }, 00:15:03.084 "memory_domains": [ 00:15:03.084 { 00:15:03.084 "dma_device_id": "system", 00:15:03.084 "dma_device_type": 1 00:15:03.084 }, 00:15:03.084 { 00:15:03.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.084 "dma_device_type": 2 00:15:03.084 } 00:15:03.084 ], 00:15:03.084 "driver_specific": {} 00:15:03.084 } 00:15:03.084 ] 00:15:03.084 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.084 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:03.084 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:03.084 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:03.084 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:03.084 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.084 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.084 [2024-10-05 08:51:39.512157] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:03.084 [2024-10-05 08:51:39.512207] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:03.084 [2024-10-05 08:51:39.512227] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:15:03.084 [2024-10-05 08:51:39.513945] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:03.084 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.084 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:03.084 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:03.084 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:03.084 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.084 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.084 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:03.084 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.084 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.084 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.084 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.084 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.084 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.084 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.084 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.084 08:51:39 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.344 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.344 "name": "Existed_Raid", 00:15:03.344 "uuid": "d0140c9e-8d31-4eb7-9994-8285178f1d8b", 00:15:03.344 "strip_size_kb": 64, 00:15:03.344 "state": "configuring", 00:15:03.344 "raid_level": "raid5f", 00:15:03.344 "superblock": true, 00:15:03.344 "num_base_bdevs": 3, 00:15:03.344 "num_base_bdevs_discovered": 2, 00:15:03.344 "num_base_bdevs_operational": 3, 00:15:03.344 "base_bdevs_list": [ 00:15:03.344 { 00:15:03.344 "name": "BaseBdev1", 00:15:03.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.344 "is_configured": false, 00:15:03.344 "data_offset": 0, 00:15:03.344 "data_size": 0 00:15:03.344 }, 00:15:03.344 { 00:15:03.344 "name": "BaseBdev2", 00:15:03.344 "uuid": "40274f55-329c-494f-b541-5e4d0e93e736", 00:15:03.344 "is_configured": true, 00:15:03.344 "data_offset": 2048, 00:15:03.344 "data_size": 63488 00:15:03.344 }, 00:15:03.344 { 00:15:03.344 "name": "BaseBdev3", 00:15:03.344 "uuid": "1971141c-2afb-4e9e-a3ad-62637155b812", 00:15:03.344 "is_configured": true, 00:15:03.344 "data_offset": 2048, 00:15:03.344 "data_size": 63488 00:15:03.344 } 00:15:03.344 ] 00:15:03.344 }' 00:15:03.344 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.344 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.604 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:03.604 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.604 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.604 [2024-10-05 08:51:39.963348] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:03.604 
08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.604 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:03.604 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:03.604 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:03.604 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.604 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.604 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:03.604 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.604 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.604 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.604 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.604 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.604 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.604 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.604 08:51:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.604 08:51:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.604 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:03.604 "name": "Existed_Raid", 00:15:03.604 "uuid": "d0140c9e-8d31-4eb7-9994-8285178f1d8b", 00:15:03.604 "strip_size_kb": 64, 00:15:03.604 "state": "configuring", 00:15:03.604 "raid_level": "raid5f", 00:15:03.604 "superblock": true, 00:15:03.604 "num_base_bdevs": 3, 00:15:03.604 "num_base_bdevs_discovered": 1, 00:15:03.604 "num_base_bdevs_operational": 3, 00:15:03.604 "base_bdevs_list": [ 00:15:03.604 { 00:15:03.604 "name": "BaseBdev1", 00:15:03.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.604 "is_configured": false, 00:15:03.604 "data_offset": 0, 00:15:03.604 "data_size": 0 00:15:03.604 }, 00:15:03.604 { 00:15:03.604 "name": null, 00:15:03.604 "uuid": "40274f55-329c-494f-b541-5e4d0e93e736", 00:15:03.604 "is_configured": false, 00:15:03.604 "data_offset": 0, 00:15:03.604 "data_size": 63488 00:15:03.604 }, 00:15:03.604 { 00:15:03.604 "name": "BaseBdev3", 00:15:03.604 "uuid": "1971141c-2afb-4e9e-a3ad-62637155b812", 00:15:03.604 "is_configured": true, 00:15:03.604 "data_offset": 2048, 00:15:03.604 "data_size": 63488 00:15:03.604 } 00:15:03.604 ] 00:15:03.604 }' 00:15:03.604 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.604 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.174 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.174 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:04.174 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.174 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.174 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.174 08:51:40 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:04.174 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:04.174 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.174 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.174 [2024-10-05 08:51:40.489665] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:04.174 BaseBdev1 00:15:04.174 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.174 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:04.174 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:04.174 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:04.174 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:04.174 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:04.174 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:04.174 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:04.174 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.174 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.174 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.174 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:04.174 
08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.174 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.174 [ 00:15:04.174 { 00:15:04.174 "name": "BaseBdev1", 00:15:04.174 "aliases": [ 00:15:04.174 "68e7df18-69c8-44d3-9d46-81195258a1f0" 00:15:04.174 ], 00:15:04.174 "product_name": "Malloc disk", 00:15:04.174 "block_size": 512, 00:15:04.174 "num_blocks": 65536, 00:15:04.174 "uuid": "68e7df18-69c8-44d3-9d46-81195258a1f0", 00:15:04.174 "assigned_rate_limits": { 00:15:04.174 "rw_ios_per_sec": 0, 00:15:04.174 "rw_mbytes_per_sec": 0, 00:15:04.174 "r_mbytes_per_sec": 0, 00:15:04.174 "w_mbytes_per_sec": 0 00:15:04.174 }, 00:15:04.174 "claimed": true, 00:15:04.174 "claim_type": "exclusive_write", 00:15:04.174 "zoned": false, 00:15:04.174 "supported_io_types": { 00:15:04.174 "read": true, 00:15:04.174 "write": true, 00:15:04.174 "unmap": true, 00:15:04.174 "flush": true, 00:15:04.174 "reset": true, 00:15:04.174 "nvme_admin": false, 00:15:04.174 "nvme_io": false, 00:15:04.174 "nvme_io_md": false, 00:15:04.174 "write_zeroes": true, 00:15:04.174 "zcopy": true, 00:15:04.174 "get_zone_info": false, 00:15:04.174 "zone_management": false, 00:15:04.174 "zone_append": false, 00:15:04.174 "compare": false, 00:15:04.174 "compare_and_write": false, 00:15:04.174 "abort": true, 00:15:04.174 "seek_hole": false, 00:15:04.174 "seek_data": false, 00:15:04.174 "copy": true, 00:15:04.174 "nvme_iov_md": false 00:15:04.174 }, 00:15:04.174 "memory_domains": [ 00:15:04.174 { 00:15:04.174 "dma_device_id": "system", 00:15:04.174 "dma_device_type": 1 00:15:04.174 }, 00:15:04.174 { 00:15:04.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.174 "dma_device_type": 2 00:15:04.174 } 00:15:04.174 ], 00:15:04.174 "driver_specific": {} 00:15:04.174 } 00:15:04.174 ] 00:15:04.174 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.174 
08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:04.174 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:04.174 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:04.174 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:04.174 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:04.174 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.174 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:04.174 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.174 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.174 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.174 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.174 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.174 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.174 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.174 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.174 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.174 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:04.174 "name": "Existed_Raid", 00:15:04.174 "uuid": "d0140c9e-8d31-4eb7-9994-8285178f1d8b", 00:15:04.174 "strip_size_kb": 64, 00:15:04.174 "state": "configuring", 00:15:04.174 "raid_level": "raid5f", 00:15:04.174 "superblock": true, 00:15:04.174 "num_base_bdevs": 3, 00:15:04.174 "num_base_bdevs_discovered": 2, 00:15:04.174 "num_base_bdevs_operational": 3, 00:15:04.174 "base_bdevs_list": [ 00:15:04.174 { 00:15:04.174 "name": "BaseBdev1", 00:15:04.174 "uuid": "68e7df18-69c8-44d3-9d46-81195258a1f0", 00:15:04.174 "is_configured": true, 00:15:04.174 "data_offset": 2048, 00:15:04.174 "data_size": 63488 00:15:04.174 }, 00:15:04.174 { 00:15:04.174 "name": null, 00:15:04.174 "uuid": "40274f55-329c-494f-b541-5e4d0e93e736", 00:15:04.174 "is_configured": false, 00:15:04.174 "data_offset": 0, 00:15:04.174 "data_size": 63488 00:15:04.174 }, 00:15:04.174 { 00:15:04.174 "name": "BaseBdev3", 00:15:04.174 "uuid": "1971141c-2afb-4e9e-a3ad-62637155b812", 00:15:04.174 "is_configured": true, 00:15:04.174 "data_offset": 2048, 00:15:04.175 "data_size": 63488 00:15:04.175 } 00:15:04.175 ] 00:15:04.175 }' 00:15:04.175 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.175 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.743 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:04.743 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.743 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.743 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.743 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.743 08:51:40 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:04.743 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:04.743 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.743 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.743 [2024-10-05 08:51:40.985079] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:04.743 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.743 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:04.743 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:04.743 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:04.743 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:04.743 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.743 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:04.743 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.743 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.743 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.743 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.743 08:51:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.743 08:51:40 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.743 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.743 08:51:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.743 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.743 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.743 "name": "Existed_Raid", 00:15:04.743 "uuid": "d0140c9e-8d31-4eb7-9994-8285178f1d8b", 00:15:04.743 "strip_size_kb": 64, 00:15:04.743 "state": "configuring", 00:15:04.743 "raid_level": "raid5f", 00:15:04.743 "superblock": true, 00:15:04.743 "num_base_bdevs": 3, 00:15:04.743 "num_base_bdevs_discovered": 1, 00:15:04.743 "num_base_bdevs_operational": 3, 00:15:04.743 "base_bdevs_list": [ 00:15:04.743 { 00:15:04.743 "name": "BaseBdev1", 00:15:04.743 "uuid": "68e7df18-69c8-44d3-9d46-81195258a1f0", 00:15:04.743 "is_configured": true, 00:15:04.743 "data_offset": 2048, 00:15:04.743 "data_size": 63488 00:15:04.743 }, 00:15:04.743 { 00:15:04.743 "name": null, 00:15:04.743 "uuid": "40274f55-329c-494f-b541-5e4d0e93e736", 00:15:04.743 "is_configured": false, 00:15:04.743 "data_offset": 0, 00:15:04.743 "data_size": 63488 00:15:04.743 }, 00:15:04.743 { 00:15:04.743 "name": null, 00:15:04.743 "uuid": "1971141c-2afb-4e9e-a3ad-62637155b812", 00:15:04.743 "is_configured": false, 00:15:04.743 "data_offset": 0, 00:15:04.743 "data_size": 63488 00:15:04.743 } 00:15:04.743 ] 00:15:04.743 }' 00:15:04.743 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.743 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.003 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 
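The `verify_raid_bdev_state` helper exercised above fetches all raid bdevs via `rpc_cmd bdev_raid_get_bdevs all`, selects the one named `Existed_Raid` with `jq -r '.[] | select(.name == "Existed_Raid")'`, and checks its state fields. A minimal Python sketch of that same selection and check, assuming a JSON shape mirroring the `raid_bdev_info` dumped in the log (this is an illustration of the verification logic, not SPDK code):

```python
import json

# Sample payload mirroring `rpc_cmd bdev_raid_get_bdevs all` output from the log,
# captured after BaseBdev3 was removed (only BaseBdev1 still configured).
raid_bdevs = json.loads("""
[{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "raid5f",
  "strip_size_kb": 64,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true},
    {"name": null, "is_configured": false},
    {"name": null, "is_configured": false}
  ]
}]
""")

def verify_raid_bdev_state(bdevs, name, expected_state, raid_level, strip_size, operational):
    # Equivalent of: jq -r '.[] | select(.name == "Existed_Raid")'
    info = next(b for b in bdevs if b["name"] == name)
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational
    # The discovered count should agree with how many base bdevs are configured.
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert discovered == info["num_base_bdevs_discovered"]
    return info

info = verify_raid_bdev_state(raid_bdevs, "Existed_Raid", "configuring", "raid5f", 64, 3)
print(info["num_base_bdevs_discovered"])  # 1
```

The same pattern repeats throughout the test: mutate the array with an RPC, then re-verify that the discovered count and state tracked the change.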
00:15:05.003 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.003 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:05.003 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.003 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.003 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:05.003 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:05.003 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.003 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.003 [2024-10-05 08:51:41.452240] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:05.003 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.003 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:05.003 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:05.003 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:05.003 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:05.003 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.003 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:05.003 08:51:41 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.003 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.003 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.003 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.003 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.003 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.003 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.003 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.262 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.262 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.262 "name": "Existed_Raid", 00:15:05.262 "uuid": "d0140c9e-8d31-4eb7-9994-8285178f1d8b", 00:15:05.262 "strip_size_kb": 64, 00:15:05.262 "state": "configuring", 00:15:05.262 "raid_level": "raid5f", 00:15:05.262 "superblock": true, 00:15:05.262 "num_base_bdevs": 3, 00:15:05.262 "num_base_bdevs_discovered": 2, 00:15:05.262 "num_base_bdevs_operational": 3, 00:15:05.262 "base_bdevs_list": [ 00:15:05.262 { 00:15:05.262 "name": "BaseBdev1", 00:15:05.262 "uuid": "68e7df18-69c8-44d3-9d46-81195258a1f0", 00:15:05.262 "is_configured": true, 00:15:05.262 "data_offset": 2048, 00:15:05.262 "data_size": 63488 00:15:05.262 }, 00:15:05.262 { 00:15:05.262 "name": null, 00:15:05.262 "uuid": "40274f55-329c-494f-b541-5e4d0e93e736", 00:15:05.262 "is_configured": false, 00:15:05.262 "data_offset": 0, 00:15:05.262 "data_size": 63488 00:15:05.262 }, 00:15:05.262 { 
00:15:05.262 "name": "BaseBdev3", 00:15:05.262 "uuid": "1971141c-2afb-4e9e-a3ad-62637155b812", 00:15:05.262 "is_configured": true, 00:15:05.262 "data_offset": 2048, 00:15:05.262 "data_size": 63488 00:15:05.262 } 00:15:05.262 ] 00:15:05.262 }' 00:15:05.262 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.262 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.521 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.521 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.521 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:05.521 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.521 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.521 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:05.521 08:51:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:05.521 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.521 08:51:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.521 [2024-10-05 08:51:41.943484] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:05.780 08:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.780 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:05.780 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:15:05.780 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:05.780 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:05.780 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.780 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:05.780 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.780 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.780 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.780 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.780 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.780 08:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.780 08:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.780 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.780 08:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.780 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.780 "name": "Existed_Raid", 00:15:05.780 "uuid": "d0140c9e-8d31-4eb7-9994-8285178f1d8b", 00:15:05.780 "strip_size_kb": 64, 00:15:05.780 "state": "configuring", 00:15:05.780 "raid_level": "raid5f", 00:15:05.780 "superblock": true, 00:15:05.780 "num_base_bdevs": 3, 00:15:05.780 "num_base_bdevs_discovered": 1, 00:15:05.780 
"num_base_bdevs_operational": 3, 00:15:05.780 "base_bdevs_list": [ 00:15:05.780 { 00:15:05.780 "name": null, 00:15:05.780 "uuid": "68e7df18-69c8-44d3-9d46-81195258a1f0", 00:15:05.780 "is_configured": false, 00:15:05.780 "data_offset": 0, 00:15:05.780 "data_size": 63488 00:15:05.780 }, 00:15:05.780 { 00:15:05.780 "name": null, 00:15:05.780 "uuid": "40274f55-329c-494f-b541-5e4d0e93e736", 00:15:05.780 "is_configured": false, 00:15:05.780 "data_offset": 0, 00:15:05.780 "data_size": 63488 00:15:05.780 }, 00:15:05.780 { 00:15:05.780 "name": "BaseBdev3", 00:15:05.780 "uuid": "1971141c-2afb-4e9e-a3ad-62637155b812", 00:15:05.780 "is_configured": true, 00:15:05.780 "data_offset": 2048, 00:15:05.780 "data_size": 63488 00:15:05.780 } 00:15:05.780 ] 00:15:05.780 }' 00:15:05.780 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.780 08:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.038 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:06.038 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.038 08:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.038 08:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.038 08:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.297 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:06.298 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:06.298 08:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.298 08:51:42 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.298 [2024-10-05 08:51:42.532785] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:06.298 08:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.298 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:06.298 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:06.298 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:06.298 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.298 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.298 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:06.298 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.298 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.298 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.298 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.298 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.298 08:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.298 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.298 08:51:42 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:06.298 08:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.298 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.298 "name": "Existed_Raid", 00:15:06.298 "uuid": "d0140c9e-8d31-4eb7-9994-8285178f1d8b", 00:15:06.298 "strip_size_kb": 64, 00:15:06.298 "state": "configuring", 00:15:06.298 "raid_level": "raid5f", 00:15:06.298 "superblock": true, 00:15:06.298 "num_base_bdevs": 3, 00:15:06.298 "num_base_bdevs_discovered": 2, 00:15:06.298 "num_base_bdevs_operational": 3, 00:15:06.298 "base_bdevs_list": [ 00:15:06.298 { 00:15:06.298 "name": null, 00:15:06.298 "uuid": "68e7df18-69c8-44d3-9d46-81195258a1f0", 00:15:06.298 "is_configured": false, 00:15:06.298 "data_offset": 0, 00:15:06.298 "data_size": 63488 00:15:06.298 }, 00:15:06.298 { 00:15:06.298 "name": "BaseBdev2", 00:15:06.298 "uuid": "40274f55-329c-494f-b541-5e4d0e93e736", 00:15:06.298 "is_configured": true, 00:15:06.298 "data_offset": 2048, 00:15:06.298 "data_size": 63488 00:15:06.298 }, 00:15:06.298 { 00:15:06.298 "name": "BaseBdev3", 00:15:06.298 "uuid": "1971141c-2afb-4e9e-a3ad-62637155b812", 00:15:06.298 "is_configured": true, 00:15:06.298 "data_offset": 2048, 00:15:06.298 "data_size": 63488 00:15:06.298 } 00:15:06.298 ] 00:15:06.298 }' 00:15:06.298 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.298 08:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.557 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.557 08:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.557 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:06.557 08:51:42 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.557 08:51:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.557 08:51:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:06.557 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:06.557 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.557 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.557 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.557 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.817 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 68e7df18-69c8-44d3-9d46-81195258a1f0 00:15:06.817 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.817 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.817 [2024-10-05 08:51:43.087442] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:06.817 [2024-10-05 08:51:43.087648] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:06.817 [2024-10-05 08:51:43.087668] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:06.817 [2024-10-05 08:51:43.087904] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:06.817 NewBaseBdev 00:15:06.817 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.817 08:51:43 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:06.817 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:15:06.817 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:06.817 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:06.817 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:06.817 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:06.817 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:06.817 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.817 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.817 [2024-10-05 08:51:43.093055] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:06.817 [2024-10-05 08:51:43.093079] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:06.817 [2024-10-05 08:51:43.093235] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.817 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.817 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:06.817 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.817 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.817 [ 00:15:06.817 { 00:15:06.817 "name": "NewBaseBdev", 00:15:06.817 
"aliases": [ 00:15:06.817 "68e7df18-69c8-44d3-9d46-81195258a1f0" 00:15:06.817 ], 00:15:06.817 "product_name": "Malloc disk", 00:15:06.817 "block_size": 512, 00:15:06.817 "num_blocks": 65536, 00:15:06.817 "uuid": "68e7df18-69c8-44d3-9d46-81195258a1f0", 00:15:06.817 "assigned_rate_limits": { 00:15:06.817 "rw_ios_per_sec": 0, 00:15:06.817 "rw_mbytes_per_sec": 0, 00:15:06.817 "r_mbytes_per_sec": 0, 00:15:06.817 "w_mbytes_per_sec": 0 00:15:06.817 }, 00:15:06.817 "claimed": true, 00:15:06.817 "claim_type": "exclusive_write", 00:15:06.817 "zoned": false, 00:15:06.817 "supported_io_types": { 00:15:06.817 "read": true, 00:15:06.817 "write": true, 00:15:06.817 "unmap": true, 00:15:06.817 "flush": true, 00:15:06.817 "reset": true, 00:15:06.817 "nvme_admin": false, 00:15:06.817 "nvme_io": false, 00:15:06.817 "nvme_io_md": false, 00:15:06.817 "write_zeroes": true, 00:15:06.817 "zcopy": true, 00:15:06.817 "get_zone_info": false, 00:15:06.817 "zone_management": false, 00:15:06.817 "zone_append": false, 00:15:06.817 "compare": false, 00:15:06.817 "compare_and_write": false, 00:15:06.817 "abort": true, 00:15:06.817 "seek_hole": false, 00:15:06.817 "seek_data": false, 00:15:06.817 "copy": true, 00:15:06.817 "nvme_iov_md": false 00:15:06.817 }, 00:15:06.817 "memory_domains": [ 00:15:06.817 { 00:15:06.817 "dma_device_id": "system", 00:15:06.817 "dma_device_type": 1 00:15:06.817 }, 00:15:06.817 { 00:15:06.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.817 "dma_device_type": 2 00:15:06.817 } 00:15:06.817 ], 00:15:06.817 "driver_specific": {} 00:15:06.817 } 00:15:06.817 ] 00:15:06.817 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.817 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:06.817 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:06.818 08:51:43 
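The `waitforbdev NewBaseBdev` step above polls `bdev_get_bdevs -b NewBaseBdev -t 2000` until the freshly created malloc bdev is reported. A hedged Python sketch of that polling loop, with the probe function injected as a stand-in for the RPC call (the 2000 ms default matches the `bdev_timeout=2000` seen in the log; the fake probe is purely illustrative):

```python
import time

def waitforbdev(get_bdev, bdev_name, timeout_ms=2000, poll_interval=0.05):
    """Poll until a bdev with the given name appears, mirroring the
    waitforbdev helper's behavior (default timeout 2000 ms, as in the log)."""
    deadline = time.monotonic() + timeout_ms / 1000.0
    while time.monotonic() < deadline:
        bdev = get_bdev(bdev_name)  # stand-in for `rpc_cmd bdev_get_bdevs -b <name>`
        if bdev is not None:
            return bdev
        time.sleep(poll_interval)
    raise TimeoutError(f"bdev {bdev_name} did not appear within {timeout_ms} ms")

# Simulated probe: the bdev "appears" on the second poll.
calls = {"n": 0}
def fake_get_bdev(name):
    calls["n"] += 1
    return {"name": name, "claimed": True} if calls["n"] >= 2 else None

bdev = waitforbdev(fake_get_bdev, "NewBaseBdev")
print(bdev["name"])  # NewBaseBdev
```

Polling with a deadline rather than a fixed sleep is what lets the test proceed as soon as bdev examination completes while still bounding the wait.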
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:06.818 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.818 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.818 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.818 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:06.818 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.818 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.818 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.818 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.818 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.818 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.818 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.818 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.818 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.818 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.818 "name": "Existed_Raid", 00:15:06.818 "uuid": "d0140c9e-8d31-4eb7-9994-8285178f1d8b", 00:15:06.818 "strip_size_kb": 64, 00:15:06.818 "state": "online", 00:15:06.818 "raid_level": "raid5f", 00:15:06.818 "superblock": true, 00:15:06.818 
"num_base_bdevs": 3, 00:15:06.818 "num_base_bdevs_discovered": 3, 00:15:06.818 "num_base_bdevs_operational": 3, 00:15:06.818 "base_bdevs_list": [ 00:15:06.818 { 00:15:06.818 "name": "NewBaseBdev", 00:15:06.818 "uuid": "68e7df18-69c8-44d3-9d46-81195258a1f0", 00:15:06.818 "is_configured": true, 00:15:06.818 "data_offset": 2048, 00:15:06.818 "data_size": 63488 00:15:06.818 }, 00:15:06.818 { 00:15:06.818 "name": "BaseBdev2", 00:15:06.818 "uuid": "40274f55-329c-494f-b541-5e4d0e93e736", 00:15:06.818 "is_configured": true, 00:15:06.818 "data_offset": 2048, 00:15:06.818 "data_size": 63488 00:15:06.818 }, 00:15:06.818 { 00:15:06.818 "name": "BaseBdev3", 00:15:06.818 "uuid": "1971141c-2afb-4e9e-a3ad-62637155b812", 00:15:06.818 "is_configured": true, 00:15:06.818 "data_offset": 2048, 00:15:06.818 "data_size": 63488 00:15:06.818 } 00:15:06.818 ] 00:15:06.818 }' 00:15:06.818 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.818 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.386 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:07.386 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:07.386 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:07.386 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:07.386 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:07.386 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:07.386 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:07.386 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:15:07.386 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.386 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.386 [2024-10-05 08:51:43.594653] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:07.386 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.386 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:07.386 "name": "Existed_Raid", 00:15:07.386 "aliases": [ 00:15:07.386 "d0140c9e-8d31-4eb7-9994-8285178f1d8b" 00:15:07.386 ], 00:15:07.386 "product_name": "Raid Volume", 00:15:07.386 "block_size": 512, 00:15:07.386 "num_blocks": 126976, 00:15:07.386 "uuid": "d0140c9e-8d31-4eb7-9994-8285178f1d8b", 00:15:07.386 "assigned_rate_limits": { 00:15:07.386 "rw_ios_per_sec": 0, 00:15:07.386 "rw_mbytes_per_sec": 0, 00:15:07.386 "r_mbytes_per_sec": 0, 00:15:07.386 "w_mbytes_per_sec": 0 00:15:07.386 }, 00:15:07.386 "claimed": false, 00:15:07.386 "zoned": false, 00:15:07.386 "supported_io_types": { 00:15:07.386 "read": true, 00:15:07.386 "write": true, 00:15:07.387 "unmap": false, 00:15:07.387 "flush": false, 00:15:07.387 "reset": true, 00:15:07.387 "nvme_admin": false, 00:15:07.387 "nvme_io": false, 00:15:07.387 "nvme_io_md": false, 00:15:07.387 "write_zeroes": true, 00:15:07.387 "zcopy": false, 00:15:07.387 "get_zone_info": false, 00:15:07.387 "zone_management": false, 00:15:07.387 "zone_append": false, 00:15:07.387 "compare": false, 00:15:07.387 "compare_and_write": false, 00:15:07.387 "abort": false, 00:15:07.387 "seek_hole": false, 00:15:07.387 "seek_data": false, 00:15:07.387 "copy": false, 00:15:07.387 "nvme_iov_md": false 00:15:07.387 }, 00:15:07.387 "driver_specific": { 00:15:07.387 "raid": { 00:15:07.387 "uuid": "d0140c9e-8d31-4eb7-9994-8285178f1d8b", 00:15:07.387 
"strip_size_kb": 64, 00:15:07.387 "state": "online", 00:15:07.387 "raid_level": "raid5f", 00:15:07.387 "superblock": true, 00:15:07.387 "num_base_bdevs": 3, 00:15:07.387 "num_base_bdevs_discovered": 3, 00:15:07.387 "num_base_bdevs_operational": 3, 00:15:07.387 "base_bdevs_list": [ 00:15:07.387 { 00:15:07.387 "name": "NewBaseBdev", 00:15:07.387 "uuid": "68e7df18-69c8-44d3-9d46-81195258a1f0", 00:15:07.387 "is_configured": true, 00:15:07.387 "data_offset": 2048, 00:15:07.387 "data_size": 63488 00:15:07.387 }, 00:15:07.387 { 00:15:07.387 "name": "BaseBdev2", 00:15:07.387 "uuid": "40274f55-329c-494f-b541-5e4d0e93e736", 00:15:07.387 "is_configured": true, 00:15:07.387 "data_offset": 2048, 00:15:07.387 "data_size": 63488 00:15:07.387 }, 00:15:07.387 { 00:15:07.387 "name": "BaseBdev3", 00:15:07.387 "uuid": "1971141c-2afb-4e9e-a3ad-62637155b812", 00:15:07.387 "is_configured": true, 00:15:07.387 "data_offset": 2048, 00:15:07.387 "data_size": 63488 00:15:07.387 } 00:15:07.387 ] 00:15:07.387 } 00:15:07.387 } 00:15:07.387 }' 00:15:07.387 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:07.387 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:07.387 BaseBdev2 00:15:07.387 BaseBdev3' 00:15:07.387 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:07.387 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:07.387 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:07.387 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:07.387 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r 
'.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:07.387 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.387 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.387 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.387 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:07.387 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:07.387 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:07.387 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:07.387 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.387 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.387 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:07.387 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.387 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:07.387 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:07.387 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:07.387 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:07.387 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.387 
08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:07.387 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.647 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.647 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:07.647 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:07.647 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:07.647 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.647 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.647 [2024-10-05 08:51:43.889997] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:07.647 [2024-10-05 08:51:43.890022] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:07.647 [2024-10-05 08:51:43.890089] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:07.647 [2024-10-05 08:51:43.890367] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:07.647 [2024-10-05 08:51:43.890387] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:07.647 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.647 08:51:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 77700 00:15:07.647 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 77700 ']' 00:15:07.647 08:51:43 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 77700 00:15:07.647 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:15:07.647 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:07.647 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77700 00:15:07.647 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:07.647 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:07.647 killing process with pid 77700 00:15:07.647 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77700' 00:15:07.647 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 77700 00:15:07.647 [2024-10-05 08:51:43.935416] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:07.647 08:51:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 77700 00:15:07.906 [2024-10-05 08:51:44.213618] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:09.293 08:51:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:09.293 00:15:09.293 real 0m10.820s 00:15:09.293 user 0m17.171s 00:15:09.293 sys 0m2.039s 00:15:09.293 08:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:09.293 08:51:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.293 ************************************ 00:15:09.293 END TEST raid5f_state_function_test_sb 00:15:09.293 ************************************ 00:15:09.293 08:51:45 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test 
raid5f 3 00:15:09.293 08:51:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:09.293 08:51:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:09.293 08:51:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:09.293 ************************************ 00:15:09.293 START TEST raid5f_superblock_test 00:15:09.293 ************************************ 00:15:09.293 08:51:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 3 00:15:09.293 08:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:09.293 08:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:09.293 08:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:09.293 08:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:09.293 08:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:09.293 08:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:09.293 08:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:09.293 08:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:09.293 08:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:09.293 08:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:09.293 08:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:09.293 08:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:09.293 08:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:09.293 08:51:45 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:15:09.293 08:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:09.293 08:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:09.293 08:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=78260 00:15:09.293 08:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:09.293 08:51:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 78260 00:15:09.293 08:51:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 78260 ']' 00:15:09.293 08:51:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.293 08:51:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:09.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.293 08:51:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.293 08:51:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:09.293 08:51:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.293 [2024-10-05 08:51:45.593006] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:15:09.293 [2024-10-05 08:51:45.593591] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78260 ] 00:15:09.293 [2024-10-05 08:51:45.757785] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.553 [2024-10-05 08:51:45.953345] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.812 [2024-10-05 08:51:46.137323] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:09.812 [2024-10-05 08:51:46.137357] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.071 malloc1 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.071 [2024-10-05 08:51:46.441500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:10.071 [2024-10-05 08:51:46.441570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.071 [2024-10-05 08:51:46.441594] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:10.071 [2024-10-05 08:51:46.441605] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.071 [2024-10-05 08:51:46.443539] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.071 [2024-10-05 08:51:46.443574] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:10.071 pt1 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.071 malloc2 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.071 [2024-10-05 08:51:46.529953] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:10.071 [2024-10-05 08:51:46.530020] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.071 [2024-10-05 08:51:46.530043] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:10.071 [2024-10-05 08:51:46.530052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.071 [2024-10-05 08:51:46.531941] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.071 [2024-10-05 08:51:46.531987] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:10.071 pt2 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.071 08:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.331 malloc3 00:15:10.331 08:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.331 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:10.331 08:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.331 08:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.331 [2024-10-05 08:51:46.579696] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:10.331 [2024-10-05 08:51:46.579747] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.331 [2024-10-05 08:51:46.579766] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:10.331 [2024-10-05 08:51:46.579774] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.331 [2024-10-05 08:51:46.581662] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.331 [2024-10-05 08:51:46.581699] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:10.331 pt3 00:15:10.331 08:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.331 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:10.331 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:10.331 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:10.331 08:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.331 08:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.331 [2024-10-05 08:51:46.591742] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:10.331 [2024-10-05 08:51:46.593434] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:10.331 [2024-10-05 08:51:46.593499] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:10.331 [2024-10-05 08:51:46.593658] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:10.331 [2024-10-05 08:51:46.593673] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:15:10.331 [2024-10-05 08:51:46.593878] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:10.331 [2024-10-05 08:51:46.598973] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:10.331 [2024-10-05 08:51:46.598992] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:10.331 [2024-10-05 08:51:46.599156] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.331 08:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.331 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:10.331 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.331 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.331 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:10.331 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.331 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:10.331 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.331 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.331 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.331 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.331 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.331 08:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.332 
08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.332 08:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.332 08:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.332 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.332 "name": "raid_bdev1", 00:15:10.332 "uuid": "eff55cdf-0e23-4b25-b009-a509f0054f13", 00:15:10.332 "strip_size_kb": 64, 00:15:10.332 "state": "online", 00:15:10.332 "raid_level": "raid5f", 00:15:10.332 "superblock": true, 00:15:10.332 "num_base_bdevs": 3, 00:15:10.332 "num_base_bdevs_discovered": 3, 00:15:10.332 "num_base_bdevs_operational": 3, 00:15:10.332 "base_bdevs_list": [ 00:15:10.332 { 00:15:10.332 "name": "pt1", 00:15:10.332 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:10.332 "is_configured": true, 00:15:10.332 "data_offset": 2048, 00:15:10.332 "data_size": 63488 00:15:10.332 }, 00:15:10.332 { 00:15:10.332 "name": "pt2", 00:15:10.332 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:10.332 "is_configured": true, 00:15:10.332 "data_offset": 2048, 00:15:10.332 "data_size": 63488 00:15:10.332 }, 00:15:10.332 { 00:15:10.332 "name": "pt3", 00:15:10.332 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:10.332 "is_configured": true, 00:15:10.332 "data_offset": 2048, 00:15:10.332 "data_size": 63488 00:15:10.332 } 00:15:10.332 ] 00:15:10.332 }' 00:15:10.332 08:51:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.332 08:51:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.591 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:10.591 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:10.591 08:51:47 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:10.851 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:10.851 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:10.851 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:10.851 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:10.851 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:10.851 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.851 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.851 [2024-10-05 08:51:47.072146] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:10.851 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.851 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:10.851 "name": "raid_bdev1", 00:15:10.851 "aliases": [ 00:15:10.851 "eff55cdf-0e23-4b25-b009-a509f0054f13" 00:15:10.851 ], 00:15:10.851 "product_name": "Raid Volume", 00:15:10.851 "block_size": 512, 00:15:10.851 "num_blocks": 126976, 00:15:10.851 "uuid": "eff55cdf-0e23-4b25-b009-a509f0054f13", 00:15:10.851 "assigned_rate_limits": { 00:15:10.851 "rw_ios_per_sec": 0, 00:15:10.851 "rw_mbytes_per_sec": 0, 00:15:10.851 "r_mbytes_per_sec": 0, 00:15:10.851 "w_mbytes_per_sec": 0 00:15:10.851 }, 00:15:10.851 "claimed": false, 00:15:10.851 "zoned": false, 00:15:10.851 "supported_io_types": { 00:15:10.851 "read": true, 00:15:10.851 "write": true, 00:15:10.851 "unmap": false, 00:15:10.851 "flush": false, 00:15:10.851 "reset": true, 00:15:10.851 "nvme_admin": false, 00:15:10.851 "nvme_io": false, 00:15:10.851 "nvme_io_md": false, 
00:15:10.851 "write_zeroes": true, 00:15:10.851 "zcopy": false, 00:15:10.851 "get_zone_info": false, 00:15:10.851 "zone_management": false, 00:15:10.851 "zone_append": false, 00:15:10.851 "compare": false, 00:15:10.851 "compare_and_write": false, 00:15:10.851 "abort": false, 00:15:10.851 "seek_hole": false, 00:15:10.851 "seek_data": false, 00:15:10.851 "copy": false, 00:15:10.851 "nvme_iov_md": false 00:15:10.851 }, 00:15:10.851 "driver_specific": { 00:15:10.851 "raid": { 00:15:10.851 "uuid": "eff55cdf-0e23-4b25-b009-a509f0054f13", 00:15:10.851 "strip_size_kb": 64, 00:15:10.851 "state": "online", 00:15:10.851 "raid_level": "raid5f", 00:15:10.851 "superblock": true, 00:15:10.851 "num_base_bdevs": 3, 00:15:10.851 "num_base_bdevs_discovered": 3, 00:15:10.851 "num_base_bdevs_operational": 3, 00:15:10.851 "base_bdevs_list": [ 00:15:10.851 { 00:15:10.851 "name": "pt1", 00:15:10.851 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:10.851 "is_configured": true, 00:15:10.851 "data_offset": 2048, 00:15:10.851 "data_size": 63488 00:15:10.851 }, 00:15:10.851 { 00:15:10.851 "name": "pt2", 00:15:10.851 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:10.851 "is_configured": true, 00:15:10.851 "data_offset": 2048, 00:15:10.851 "data_size": 63488 00:15:10.851 }, 00:15:10.851 { 00:15:10.851 "name": "pt3", 00:15:10.851 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:10.851 "is_configured": true, 00:15:10.851 "data_offset": 2048, 00:15:10.851 "data_size": 63488 00:15:10.851 } 00:15:10.851 ] 00:15:10.851 } 00:15:10.851 } 00:15:10.851 }' 00:15:10.851 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:10.851 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:10.851 pt2 00:15:10.851 pt3' 00:15:10.851 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:15:10.851 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:10.851 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:10.851 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:10.851 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.851 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.851 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:10.851 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.851 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:10.851 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:10.851 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:10.851 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:10.851 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.851 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.851 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:10.851 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.851 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:10.851 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:10.851 
08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:10.851 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:10.851 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:10.851 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.851 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.851 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.112 [2024-10-05 08:51:47.347612] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=eff55cdf-0e23-4b25-b009-a509f0054f13 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z eff55cdf-0e23-4b25-b009-a509f0054f13 ']' 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:11.112 08:51:47 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.112 [2024-10-05 08:51:47.395374] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:11.112 [2024-10-05 08:51:47.395400] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:11.112 [2024-10-05 08:51:47.395456] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:11.112 [2024-10-05 08:51:47.395513] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:11.112 [2024-10-05 08:51:47.395523] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.112 [2024-10-05 08:51:47.531176] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:11.112 [2024-10-05 08:51:47.533038] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:11.112 [2024-10-05 08:51:47.533127] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:11.112 [2024-10-05 08:51:47.533192] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:11.112 [2024-10-05 08:51:47.533275] 
bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:11.112 [2024-10-05 08:51:47.533343] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:11.112 [2024-10-05 08:51:47.533401] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:11.112 [2024-10-05 08:51:47.533434] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:11.112 request: 00:15:11.112 { 00:15:11.112 "name": "raid_bdev1", 00:15:11.112 "raid_level": "raid5f", 00:15:11.112 "base_bdevs": [ 00:15:11.112 "malloc1", 00:15:11.112 "malloc2", 00:15:11.112 "malloc3" 00:15:11.112 ], 00:15:11.112 "strip_size_kb": 64, 00:15:11.112 "superblock": false, 00:15:11.112 "method": "bdev_raid_create", 00:15:11.112 "req_id": 1 00:15:11.112 } 00:15:11.112 Got JSON-RPC error response 00:15:11.112 response: 00:15:11.112 { 00:15:11.112 "code": -17, 00:15:11.112 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:11.112 } 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.112 
08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.112 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.373 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:11.373 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:11.373 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:11.373 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.373 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.373 [2024-10-05 08:51:47.599044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:11.373 [2024-10-05 08:51:47.599136] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.373 [2024-10-05 08:51:47.599169] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:11.373 [2024-10-05 08:51:47.599196] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.373 [2024-10-05 08:51:47.601259] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.373 [2024-10-05 08:51:47.601328] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:11.373 [2024-10-05 08:51:47.601415] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:11.373 [2024-10-05 08:51:47.601473] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:11.373 pt1 00:15:11.373 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.373 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:15:11.373 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.373 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:11.373 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:11.373 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.373 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.373 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.373 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.373 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.373 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.373 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.373 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.373 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.373 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.373 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.373 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.373 "name": "raid_bdev1", 00:15:11.373 "uuid": "eff55cdf-0e23-4b25-b009-a509f0054f13", 00:15:11.373 "strip_size_kb": 64, 00:15:11.373 "state": "configuring", 00:15:11.373 "raid_level": "raid5f", 00:15:11.373 "superblock": true, 00:15:11.373 "num_base_bdevs": 3, 00:15:11.373 "num_base_bdevs_discovered": 1, 00:15:11.373 
"num_base_bdevs_operational": 3, 00:15:11.373 "base_bdevs_list": [ 00:15:11.373 { 00:15:11.373 "name": "pt1", 00:15:11.373 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:11.373 "is_configured": true, 00:15:11.373 "data_offset": 2048, 00:15:11.373 "data_size": 63488 00:15:11.373 }, 00:15:11.373 { 00:15:11.373 "name": null, 00:15:11.373 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:11.373 "is_configured": false, 00:15:11.373 "data_offset": 2048, 00:15:11.373 "data_size": 63488 00:15:11.373 }, 00:15:11.373 { 00:15:11.373 "name": null, 00:15:11.373 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:11.373 "is_configured": false, 00:15:11.373 "data_offset": 2048, 00:15:11.373 "data_size": 63488 00:15:11.373 } 00:15:11.373 ] 00:15:11.373 }' 00:15:11.373 08:51:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.373 08:51:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.633 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:11.633 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:11.633 08:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.633 08:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.633 [2024-10-05 08:51:48.010329] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:11.633 [2024-10-05 08:51:48.010383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.633 [2024-10-05 08:51:48.010402] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:11.633 [2024-10-05 08:51:48.010410] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.633 [2024-10-05 08:51:48.010739] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.633 [2024-10-05 08:51:48.010753] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:11.633 [2024-10-05 08:51:48.010813] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:11.633 [2024-10-05 08:51:48.010828] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:11.633 pt2 00:15:11.633 08:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.633 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:11.633 08:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.633 08:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.633 [2024-10-05 08:51:48.022356] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:11.633 08:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.633 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:11.633 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.633 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:11.633 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:11.633 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.633 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.633 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.633 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:11.633 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.633 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.633 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.633 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.633 08:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.633 08:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.633 08:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.633 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.633 "name": "raid_bdev1", 00:15:11.633 "uuid": "eff55cdf-0e23-4b25-b009-a509f0054f13", 00:15:11.633 "strip_size_kb": 64, 00:15:11.633 "state": "configuring", 00:15:11.633 "raid_level": "raid5f", 00:15:11.633 "superblock": true, 00:15:11.633 "num_base_bdevs": 3, 00:15:11.633 "num_base_bdevs_discovered": 1, 00:15:11.633 "num_base_bdevs_operational": 3, 00:15:11.633 "base_bdevs_list": [ 00:15:11.633 { 00:15:11.633 "name": "pt1", 00:15:11.633 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:11.633 "is_configured": true, 00:15:11.633 "data_offset": 2048, 00:15:11.633 "data_size": 63488 00:15:11.633 }, 00:15:11.633 { 00:15:11.633 "name": null, 00:15:11.633 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:11.633 "is_configured": false, 00:15:11.633 "data_offset": 0, 00:15:11.633 "data_size": 63488 00:15:11.633 }, 00:15:11.633 { 00:15:11.633 "name": null, 00:15:11.633 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:11.633 "is_configured": false, 00:15:11.633 "data_offset": 2048, 00:15:11.633 "data_size": 63488 00:15:11.633 } 00:15:11.633 ] 00:15:11.633 }' 00:15:11.633 08:51:48 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.633 08:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.201 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:12.201 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:12.201 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:12.201 08:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.201 08:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.201 [2024-10-05 08:51:48.521439] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:12.201 [2024-10-05 08:51:48.521544] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.201 [2024-10-05 08:51:48.521574] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:12.201 [2024-10-05 08:51:48.521604] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.201 [2024-10-05 08:51:48.521988] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.201 [2024-10-05 08:51:48.522053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:12.201 [2024-10-05 08:51:48.522139] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:12.201 [2024-10-05 08:51:48.522187] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:12.201 pt2 00:15:12.201 08:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.201 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:12.201 08:51:48 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:12.201 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:12.201 08:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.201 08:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.201 [2024-10-05 08:51:48.533445] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:12.201 [2024-10-05 08:51:48.533531] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.201 [2024-10-05 08:51:48.533547] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:12.201 [2024-10-05 08:51:48.533557] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.201 [2024-10-05 08:51:48.533871] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.201 [2024-10-05 08:51:48.533892] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:12.201 [2024-10-05 08:51:48.533941] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:12.201 [2024-10-05 08:51:48.533973] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:12.201 [2024-10-05 08:51:48.534073] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:12.201 [2024-10-05 08:51:48.534084] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:12.201 [2024-10-05 08:51:48.534305] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:12.201 [2024-10-05 08:51:48.539500] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:12.201 pt3 00:15:12.201 [2024-10-05 08:51:48.539564] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:12.201 [2024-10-05 08:51:48.539724] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.201 08:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.201 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:12.201 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:12.201 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:12.201 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.201 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.201 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:12.201 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.201 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:12.201 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.201 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.201 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.201 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.201 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.201 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.201 08:51:48 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.201 08:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.201 08:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.201 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.201 "name": "raid_bdev1", 00:15:12.201 "uuid": "eff55cdf-0e23-4b25-b009-a509f0054f13", 00:15:12.201 "strip_size_kb": 64, 00:15:12.201 "state": "online", 00:15:12.201 "raid_level": "raid5f", 00:15:12.201 "superblock": true, 00:15:12.201 "num_base_bdevs": 3, 00:15:12.201 "num_base_bdevs_discovered": 3, 00:15:12.201 "num_base_bdevs_operational": 3, 00:15:12.201 "base_bdevs_list": [ 00:15:12.201 { 00:15:12.201 "name": "pt1", 00:15:12.201 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:12.201 "is_configured": true, 00:15:12.201 "data_offset": 2048, 00:15:12.201 "data_size": 63488 00:15:12.202 }, 00:15:12.202 { 00:15:12.202 "name": "pt2", 00:15:12.202 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:12.202 "is_configured": true, 00:15:12.202 "data_offset": 2048, 00:15:12.202 "data_size": 63488 00:15:12.202 }, 00:15:12.202 { 00:15:12.202 "name": "pt3", 00:15:12.202 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:12.202 "is_configured": true, 00:15:12.202 "data_offset": 2048, 00:15:12.202 "data_size": 63488 00:15:12.202 } 00:15:12.202 ] 00:15:12.202 }' 00:15:12.202 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.202 08:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.770 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:12.770 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:12.770 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:15:12.770 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:12.770 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:12.770 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:12.770 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:12.770 08:51:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:12.770 08:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.770 08:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.770 [2024-10-05 08:51:48.965171] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:12.770 08:51:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.770 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:12.770 "name": "raid_bdev1", 00:15:12.770 "aliases": [ 00:15:12.770 "eff55cdf-0e23-4b25-b009-a509f0054f13" 00:15:12.770 ], 00:15:12.770 "product_name": "Raid Volume", 00:15:12.770 "block_size": 512, 00:15:12.770 "num_blocks": 126976, 00:15:12.770 "uuid": "eff55cdf-0e23-4b25-b009-a509f0054f13", 00:15:12.770 "assigned_rate_limits": { 00:15:12.770 "rw_ios_per_sec": 0, 00:15:12.770 "rw_mbytes_per_sec": 0, 00:15:12.770 "r_mbytes_per_sec": 0, 00:15:12.770 "w_mbytes_per_sec": 0 00:15:12.770 }, 00:15:12.770 "claimed": false, 00:15:12.770 "zoned": false, 00:15:12.770 "supported_io_types": { 00:15:12.770 "read": true, 00:15:12.770 "write": true, 00:15:12.770 "unmap": false, 00:15:12.770 "flush": false, 00:15:12.770 "reset": true, 00:15:12.770 "nvme_admin": false, 00:15:12.770 "nvme_io": false, 00:15:12.770 "nvme_io_md": false, 00:15:12.770 "write_zeroes": true, 00:15:12.771 "zcopy": false, 00:15:12.771 
"get_zone_info": false, 00:15:12.771 "zone_management": false, 00:15:12.771 "zone_append": false, 00:15:12.771 "compare": false, 00:15:12.771 "compare_and_write": false, 00:15:12.771 "abort": false, 00:15:12.771 "seek_hole": false, 00:15:12.771 "seek_data": false, 00:15:12.771 "copy": false, 00:15:12.771 "nvme_iov_md": false 00:15:12.771 }, 00:15:12.771 "driver_specific": { 00:15:12.771 "raid": { 00:15:12.771 "uuid": "eff55cdf-0e23-4b25-b009-a509f0054f13", 00:15:12.771 "strip_size_kb": 64, 00:15:12.771 "state": "online", 00:15:12.771 "raid_level": "raid5f", 00:15:12.771 "superblock": true, 00:15:12.771 "num_base_bdevs": 3, 00:15:12.771 "num_base_bdevs_discovered": 3, 00:15:12.771 "num_base_bdevs_operational": 3, 00:15:12.771 "base_bdevs_list": [ 00:15:12.771 { 00:15:12.771 "name": "pt1", 00:15:12.771 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:12.771 "is_configured": true, 00:15:12.771 "data_offset": 2048, 00:15:12.771 "data_size": 63488 00:15:12.771 }, 00:15:12.771 { 00:15:12.771 "name": "pt2", 00:15:12.771 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:12.771 "is_configured": true, 00:15:12.771 "data_offset": 2048, 00:15:12.771 "data_size": 63488 00:15:12.771 }, 00:15:12.771 { 00:15:12.771 "name": "pt3", 00:15:12.771 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:12.771 "is_configured": true, 00:15:12.771 "data_offset": 2048, 00:15:12.771 "data_size": 63488 00:15:12.771 } 00:15:12.771 ] 00:15:12.771 } 00:15:12.771 } 00:15:12.771 }' 00:15:12.771 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:12.771 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:12.771 pt2 00:15:12.771 pt3' 00:15:12.771 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:12.771 08:51:49 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:12.771 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:12.771 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:12.771 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:12.771 08:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.771 08:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.771 08:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.771 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:12.771 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:12.771 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:12.771 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:12.771 08:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.771 08:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.771 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:12.771 08:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.771 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:12.771 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:12.771 08:51:49 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:12.771 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:12.771 08:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.771 08:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.771 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:12.771 08:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.771 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:12.771 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:13.031 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:13.031 08:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.031 08:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.031 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:13.031 [2024-10-05 08:51:49.244612] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:13.031 08:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.031 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' eff55cdf-0e23-4b25-b009-a509f0054f13 '!=' eff55cdf-0e23-4b25-b009-a509f0054f13 ']' 00:15:13.031 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:13.031 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:13.031 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:15:13.031 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:13.031 08:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.031 08:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.031 [2024-10-05 08:51:49.296417] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:13.031 08:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.031 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:13.031 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.031 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.031 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:13.031 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.031 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:13.031 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.031 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.031 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.031 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.031 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.031 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.031 08:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:13.031 08:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.031 08:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.031 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.031 "name": "raid_bdev1", 00:15:13.031 "uuid": "eff55cdf-0e23-4b25-b009-a509f0054f13", 00:15:13.031 "strip_size_kb": 64, 00:15:13.031 "state": "online", 00:15:13.031 "raid_level": "raid5f", 00:15:13.031 "superblock": true, 00:15:13.031 "num_base_bdevs": 3, 00:15:13.031 "num_base_bdevs_discovered": 2, 00:15:13.031 "num_base_bdevs_operational": 2, 00:15:13.031 "base_bdevs_list": [ 00:15:13.031 { 00:15:13.031 "name": null, 00:15:13.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.031 "is_configured": false, 00:15:13.031 "data_offset": 0, 00:15:13.031 "data_size": 63488 00:15:13.031 }, 00:15:13.031 { 00:15:13.031 "name": "pt2", 00:15:13.031 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:13.031 "is_configured": true, 00:15:13.031 "data_offset": 2048, 00:15:13.031 "data_size": 63488 00:15:13.031 }, 00:15:13.031 { 00:15:13.031 "name": "pt3", 00:15:13.031 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:13.031 "is_configured": true, 00:15:13.031 "data_offset": 2048, 00:15:13.031 "data_size": 63488 00:15:13.031 } 00:15:13.031 ] 00:15:13.031 }' 00:15:13.031 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.031 08:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.290 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:13.290 08:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.290 08:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.290 [2024-10-05 08:51:49.731642] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:13.290 [2024-10-05 08:51:49.731714] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:13.290 [2024-10-05 08:51:49.731783] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:13.290 [2024-10-05 08:51:49.731841] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:13.290 [2024-10-05 08:51:49.731877] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:13.290 08:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.290 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.290 08:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.290 08:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.290 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:13.290 08:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.549 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:13.549 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:13.549 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:13.550 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:13.550 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:13.550 08:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.550 08:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:15:13.550 08:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.550 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:13.550 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:13.550 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:13.550 08:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.550 08:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.550 08:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.550 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:13.550 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:13.550 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:13.550 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:13.550 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:13.550 08:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.550 08:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.550 [2024-10-05 08:51:49.799512] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:13.550 [2024-10-05 08:51:49.799560] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.550 [2024-10-05 08:51:49.799573] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:13.550 [2024-10-05 08:51:49.799583] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:15:13.550 [2024-10-05 08:51:49.801594] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.550 [2024-10-05 08:51:49.801635] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:13.550 [2024-10-05 08:51:49.801692] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:13.550 [2024-10-05 08:51:49.801733] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:13.550 pt2 00:15:13.550 08:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.550 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:13.550 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.550 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:13.550 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:13.550 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.550 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:13.550 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.550 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.550 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.550 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.550 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.550 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:13.550 08:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.550 08:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.550 08:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.550 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.550 "name": "raid_bdev1", 00:15:13.550 "uuid": "eff55cdf-0e23-4b25-b009-a509f0054f13", 00:15:13.550 "strip_size_kb": 64, 00:15:13.550 "state": "configuring", 00:15:13.550 "raid_level": "raid5f", 00:15:13.550 "superblock": true, 00:15:13.550 "num_base_bdevs": 3, 00:15:13.550 "num_base_bdevs_discovered": 1, 00:15:13.550 "num_base_bdevs_operational": 2, 00:15:13.550 "base_bdevs_list": [ 00:15:13.550 { 00:15:13.550 "name": null, 00:15:13.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.550 "is_configured": false, 00:15:13.550 "data_offset": 2048, 00:15:13.550 "data_size": 63488 00:15:13.550 }, 00:15:13.550 { 00:15:13.550 "name": "pt2", 00:15:13.550 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:13.550 "is_configured": true, 00:15:13.550 "data_offset": 2048, 00:15:13.550 "data_size": 63488 00:15:13.550 }, 00:15:13.550 { 00:15:13.550 "name": null, 00:15:13.550 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:13.550 "is_configured": false, 00:15:13.550 "data_offset": 2048, 00:15:13.550 "data_size": 63488 00:15:13.550 } 00:15:13.550 ] 00:15:13.550 }' 00:15:13.550 08:51:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.550 08:51:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.119 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:14.119 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:14.119 08:51:50 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:15:14.119 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:14.119 08:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.119 08:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.119 [2024-10-05 08:51:50.294666] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:14.119 [2024-10-05 08:51:50.294762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.119 [2024-10-05 08:51:50.294795] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:14.119 [2024-10-05 08:51:50.294823] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.119 [2024-10-05 08:51:50.295173] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.119 [2024-10-05 08:51:50.295230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:14.119 [2024-10-05 08:51:50.295309] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:14.119 [2024-10-05 08:51:50.295363] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:14.119 [2024-10-05 08:51:50.295476] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:14.119 [2024-10-05 08:51:50.295515] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:14.119 [2024-10-05 08:51:50.295720] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:14.119 [2024-10-05 08:51:50.300800] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:14.119 [2024-10-05 08:51:50.300851] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:15:14.119 [2024-10-05 08:51:50.301217] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:14.119 pt3 00:15:14.119 08:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.119 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:14.119 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.119 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.119 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:14.119 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.119 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:14.119 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.119 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.119 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.119 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.119 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.119 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.119 08:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.119 08:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.119 08:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.119 08:51:50 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.119 "name": "raid_bdev1", 00:15:14.119 "uuid": "eff55cdf-0e23-4b25-b009-a509f0054f13", 00:15:14.119 "strip_size_kb": 64, 00:15:14.119 "state": "online", 00:15:14.119 "raid_level": "raid5f", 00:15:14.119 "superblock": true, 00:15:14.119 "num_base_bdevs": 3, 00:15:14.119 "num_base_bdevs_discovered": 2, 00:15:14.119 "num_base_bdevs_operational": 2, 00:15:14.119 "base_bdevs_list": [ 00:15:14.119 { 00:15:14.119 "name": null, 00:15:14.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.119 "is_configured": false, 00:15:14.119 "data_offset": 2048, 00:15:14.119 "data_size": 63488 00:15:14.119 }, 00:15:14.119 { 00:15:14.119 "name": "pt2", 00:15:14.119 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:14.119 "is_configured": true, 00:15:14.119 "data_offset": 2048, 00:15:14.119 "data_size": 63488 00:15:14.119 }, 00:15:14.119 { 00:15:14.119 "name": "pt3", 00:15:14.119 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:14.119 "is_configured": true, 00:15:14.119 "data_offset": 2048, 00:15:14.119 "data_size": 63488 00:15:14.119 } 00:15:14.119 ] 00:15:14.119 }' 00:15:14.119 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.119 08:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.379 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:14.379 08:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.379 08:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.379 [2024-10-05 08:51:50.742511] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:14.379 [2024-10-05 08:51:50.742538] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:14.379 [2024-10-05 08:51:50.742587] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:14.379 [2024-10-05 08:51:50.742637] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:14.379 [2024-10-05 08:51:50.742645] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:14.379 08:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.379 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.379 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:14.379 08:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.379 08:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.379 08:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.379 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:14.379 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:14.379 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:15:14.379 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:15:14.379 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:15:14.379 08:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.379 08:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.379 08:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.379 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:15:14.379 08:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.379 08:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.379 [2024-10-05 08:51:50.814413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:14.379 [2024-10-05 08:51:50.814465] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.379 [2024-10-05 08:51:50.814481] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:14.379 [2024-10-05 08:51:50.814489] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.379 [2024-10-05 08:51:50.816631] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.379 [2024-10-05 08:51:50.816667] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:14.379 [2024-10-05 08:51:50.816728] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:14.379 [2024-10-05 08:51:50.816772] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:14.379 [2024-10-05 08:51:50.816880] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:14.379 [2024-10-05 08:51:50.816893] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:14.379 [2024-10-05 08:51:50.816907] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:14.379 [2024-10-05 08:51:50.816992] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:14.379 pt1 00:15:14.379 08:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.379 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:15:14.379 08:51:50 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:14.379 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.379 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:14.379 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:14.379 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.379 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:14.380 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.380 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.380 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.380 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.380 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.380 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.380 08:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.380 08:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.380 08:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.638 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.638 "name": "raid_bdev1", 00:15:14.638 "uuid": "eff55cdf-0e23-4b25-b009-a509f0054f13", 00:15:14.638 "strip_size_kb": 64, 00:15:14.638 "state": "configuring", 00:15:14.638 "raid_level": "raid5f", 00:15:14.638 
"superblock": true, 00:15:14.638 "num_base_bdevs": 3, 00:15:14.638 "num_base_bdevs_discovered": 1, 00:15:14.638 "num_base_bdevs_operational": 2, 00:15:14.638 "base_bdevs_list": [ 00:15:14.638 { 00:15:14.638 "name": null, 00:15:14.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.638 "is_configured": false, 00:15:14.638 "data_offset": 2048, 00:15:14.638 "data_size": 63488 00:15:14.638 }, 00:15:14.638 { 00:15:14.638 "name": "pt2", 00:15:14.638 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:14.638 "is_configured": true, 00:15:14.638 "data_offset": 2048, 00:15:14.638 "data_size": 63488 00:15:14.638 }, 00:15:14.638 { 00:15:14.638 "name": null, 00:15:14.638 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:14.638 "is_configured": false, 00:15:14.638 "data_offset": 2048, 00:15:14.638 "data_size": 63488 00:15:14.638 } 00:15:14.638 ] 00:15:14.638 }' 00:15:14.638 08:51:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.638 08:51:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.898 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:14.898 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.898 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.898 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:14.898 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.898 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:14.898 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:14.898 08:51:51 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.898 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.898 [2024-10-05 08:51:51.325528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:14.898 [2024-10-05 08:51:51.325623] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.898 [2024-10-05 08:51:51.325658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:14.898 [2024-10-05 08:51:51.325686] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.898 [2024-10-05 08:51:51.326104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.898 [2024-10-05 08:51:51.326163] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:14.898 [2024-10-05 08:51:51.326262] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:14.898 [2024-10-05 08:51:51.326308] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:14.898 [2024-10-05 08:51:51.326434] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:14.898 [2024-10-05 08:51:51.326470] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:14.898 [2024-10-05 08:51:51.326748] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:14.898 [2024-10-05 08:51:51.331952] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:14.898 [2024-10-05 08:51:51.332016] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:14.898 [2024-10-05 08:51:51.332257] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:14.898 pt3 00:15:14.898 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:15:14.898 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:14.898 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.898 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.898 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:14.898 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.898 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:14.898 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.898 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.898 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.898 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.898 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.898 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.898 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.898 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.898 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.158 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.158 "name": "raid_bdev1", 00:15:15.158 "uuid": "eff55cdf-0e23-4b25-b009-a509f0054f13", 00:15:15.158 "strip_size_kb": 64, 00:15:15.158 "state": "online", 00:15:15.158 "raid_level": 
"raid5f", 00:15:15.158 "superblock": true, 00:15:15.158 "num_base_bdevs": 3, 00:15:15.158 "num_base_bdevs_discovered": 2, 00:15:15.158 "num_base_bdevs_operational": 2, 00:15:15.158 "base_bdevs_list": [ 00:15:15.158 { 00:15:15.158 "name": null, 00:15:15.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.158 "is_configured": false, 00:15:15.158 "data_offset": 2048, 00:15:15.158 "data_size": 63488 00:15:15.158 }, 00:15:15.158 { 00:15:15.158 "name": "pt2", 00:15:15.158 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:15.158 "is_configured": true, 00:15:15.158 "data_offset": 2048, 00:15:15.158 "data_size": 63488 00:15:15.158 }, 00:15:15.158 { 00:15:15.158 "name": "pt3", 00:15:15.158 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:15.158 "is_configured": true, 00:15:15.158 "data_offset": 2048, 00:15:15.158 "data_size": 63488 00:15:15.158 } 00:15:15.158 ] 00:15:15.158 }' 00:15:15.158 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.158 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.418 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:15.418 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.418 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.418 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:15.418 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.418 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:15.418 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:15.418 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 
00:15:15.418 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.418 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.418 [2024-10-05 08:51:51.865604] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:15.418 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.677 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' eff55cdf-0e23-4b25-b009-a509f0054f13 '!=' eff55cdf-0e23-4b25-b009-a509f0054f13 ']' 00:15:15.677 08:51:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 78260 00:15:15.677 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 78260 ']' 00:15:15.677 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 78260 00:15:15.677 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:15:15.677 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:15.677 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78260 00:15:15.677 killing process with pid 78260 00:15:15.677 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:15.677 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:15.677 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78260' 00:15:15.677 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 78260 00:15:15.677 [2024-10-05 08:51:51.934649] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:15.677 [2024-10-05 08:51:51.934713] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:15:15.677 [2024-10-05 08:51:51.934758] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:15.677 [2024-10-05 08:51:51.934767] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:15.678 08:51:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 78260 00:15:15.937 [2024-10-05 08:51:52.215521] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:17.319 08:51:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:17.319 00:15:17.319 real 0m7.912s 00:15:17.319 user 0m12.308s 00:15:17.319 sys 0m1.468s 00:15:17.319 08:51:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:17.319 ************************************ 00:15:17.319 END TEST raid5f_superblock_test 00:15:17.319 ************************************ 00:15:17.319 08:51:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.319 08:51:53 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:17.319 08:51:53 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:15:17.319 08:51:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:17.319 08:51:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:17.319 08:51:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:17.319 ************************************ 00:15:17.319 START TEST raid5f_rebuild_test 00:15:17.319 ************************************ 00:15:17.319 08:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 false false true 00:15:17.319 08:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:17.319 08:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:15:17.319 08:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:17.319 08:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:17.319 08:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:17.319 08:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:17.319 08:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:17.319 08:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:17.319 08:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:17.319 08:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:17.319 08:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:17.319 08:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:17.319 08:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:17.319 08:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:17.319 08:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:17.319 08:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:17.319 08:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:17.319 08:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:17.319 08:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:17.319 08:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:17.319 08:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:17.319 08:51:53 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:17.319 08:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:17.319 08:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:17.319 08:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:17.319 08:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:17.319 08:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:17.319 08:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:17.319 08:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=78665 00:15:17.319 08:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:17.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.319 08:51:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 78665 00:15:17.319 08:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 78665 ']' 00:15:17.319 08:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.319 08:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:17.319 08:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.319 08:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:17.319 08:51:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.319 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:15:17.319 Zero copy mechanism will not be used. 00:15:17.319 [2024-10-05 08:51:53.593925] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:15:17.319 [2024-10-05 08:51:53.594069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78665 ] 00:15:17.319 [2024-10-05 08:51:53.758262] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.579 [2024-10-05 08:51:53.953355] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.838 [2024-10-05 08:51:54.148949] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:17.838 [2024-10-05 08:51:54.149014] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.098 08:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:18.098 08:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:15:18.098 08:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:18.098 08:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:18.098 08:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.098 08:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.098 BaseBdev1_malloc 00:15:18.098 08:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.098 08:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:18.098 08:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.098 08:51:54 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.098 [2024-10-05 08:51:54.445866] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:18.098 [2024-10-05 08:51:54.446041] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.098 [2024-10-05 08:51:54.446070] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:18.098 [2024-10-05 08:51:54.446084] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.098 [2024-10-05 08:51:54.448040] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.098 [2024-10-05 08:51:54.448078] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:18.098 BaseBdev1 00:15:18.098 08:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.098 08:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:18.098 08:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:18.098 08:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.098 08:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.098 BaseBdev2_malloc 00:15:18.098 08:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.098 08:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:18.098 08:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.098 08:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.098 [2024-10-05 08:51:54.507842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:15:18.098 [2024-10-05 08:51:54.507901] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.098 [2024-10-05 08:51:54.507919] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:18.098 [2024-10-05 08:51:54.507929] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.098 [2024-10-05 08:51:54.509894] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.098 [2024-10-05 08:51:54.509937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:18.098 BaseBdev2 00:15:18.098 08:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.098 08:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:18.098 08:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:18.098 08:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.098 08:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.098 BaseBdev3_malloc 00:15:18.098 08:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.098 08:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:18.098 08:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.098 08:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.098 [2024-10-05 08:51:54.559689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:18.098 [2024-10-05 08:51:54.559741] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.098 [2024-10-05 08:51:54.559758] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000008a80 00:15:18.098 [2024-10-05 08:51:54.559768] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.098 [2024-10-05 08:51:54.561709] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.098 [2024-10-05 08:51:54.561819] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:18.098 BaseBdev3 00:15:18.098 08:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.098 08:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:18.098 08:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.098 08:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.358 spare_malloc 00:15:18.358 08:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.358 08:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:18.358 08:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.358 08:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.358 spare_delay 00:15:18.358 08:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.358 08:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:18.358 08:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.358 08:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.358 [2024-10-05 08:51:54.624937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:18.358 [2024-10-05 08:51:54.625026] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.358 [2024-10-05 08:51:54.625045] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:18.358 [2024-10-05 08:51:54.625055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.358 [2024-10-05 08:51:54.627011] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.358 [2024-10-05 08:51:54.627049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:18.358 spare 00:15:18.358 08:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.358 08:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:18.358 08:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.358 08:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.358 [2024-10-05 08:51:54.637011] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:18.358 [2024-10-05 08:51:54.638685] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:18.358 [2024-10-05 08:51:54.638787] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:18.358 [2024-10-05 08:51:54.638867] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:18.358 [2024-10-05 08:51:54.638876] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:18.358 [2024-10-05 08:51:54.639117] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:18.358 [2024-10-05 08:51:54.644324] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:18.358 [2024-10-05 08:51:54.644347] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:18.358 [2024-10-05 08:51:54.644524] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.358 08:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.358 08:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:18.358 08:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:18.358 08:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:18.358 08:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:18.358 08:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.358 08:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:18.358 08:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.358 08:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.358 08:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.358 08:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.358 08:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.358 08:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.358 08:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.358 08:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.358 08:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.358 08:51:54 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.358 "name": "raid_bdev1", 00:15:18.358 "uuid": "b30710bf-7b42-4c5a-8850-858c8dcd4b5f", 00:15:18.358 "strip_size_kb": 64, 00:15:18.358 "state": "online", 00:15:18.358 "raid_level": "raid5f", 00:15:18.358 "superblock": false, 00:15:18.358 "num_base_bdevs": 3, 00:15:18.358 "num_base_bdevs_discovered": 3, 00:15:18.358 "num_base_bdevs_operational": 3, 00:15:18.358 "base_bdevs_list": [ 00:15:18.358 { 00:15:18.358 "name": "BaseBdev1", 00:15:18.358 "uuid": "e3fd6917-ca33-5c4f-8055-bf9724fc1945", 00:15:18.358 "is_configured": true, 00:15:18.358 "data_offset": 0, 00:15:18.358 "data_size": 65536 00:15:18.358 }, 00:15:18.358 { 00:15:18.358 "name": "BaseBdev2", 00:15:18.358 "uuid": "2fecbac9-3013-53ee-abf4-2a3e752ae7d9", 00:15:18.358 "is_configured": true, 00:15:18.358 "data_offset": 0, 00:15:18.358 "data_size": 65536 00:15:18.358 }, 00:15:18.358 { 00:15:18.358 "name": "BaseBdev3", 00:15:18.359 "uuid": "5a204ea2-6f96-5a6d-b5ef-eac6eeab6364", 00:15:18.359 "is_configured": true, 00:15:18.359 "data_offset": 0, 00:15:18.359 "data_size": 65536 00:15:18.359 } 00:15:18.359 ] 00:15:18.359 }' 00:15:18.359 08:51:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.359 08:51:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.618 08:51:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:18.618 08:51:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:18.618 08:51:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.618 08:51:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.618 [2024-10-05 08:51:55.085877] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:18.877 08:51:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:15:18.877 08:51:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:15:18.877 08:51:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.877 08:51:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:18.877 08:51:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.877 08:51:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.877 08:51:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.877 08:51:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:18.877 08:51:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:18.877 08:51:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:18.877 08:51:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:18.877 08:51:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:18.877 08:51:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:18.877 08:51:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:18.877 08:51:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:18.877 08:51:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:18.877 08:51:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:18.877 08:51:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:18.877 08:51:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:18.877 08:51:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:15:18.877 08:51:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:18.877 [2024-10-05 08:51:55.337353] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:19.137 /dev/nbd0 00:15:19.137 08:51:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:19.137 08:51:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:19.137 08:51:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:19.137 08:51:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:19.137 08:51:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:19.137 08:51:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:19.137 08:51:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:19.137 08:51:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:19.137 08:51:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:19.137 08:51:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:19.137 08:51:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:19.137 1+0 records in 00:15:19.137 1+0 records out 00:15:19.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283021 s, 14.5 MB/s 00:15:19.137 08:51:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.137 08:51:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:19.137 08:51:55 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:19.137 08:51:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:19.137 08:51:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:19.137 08:51:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:19.137 08:51:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:19.137 08:51:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:19.137 08:51:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:19.137 08:51:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:19.137 08:51:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:15:19.706 512+0 records in 00:15:19.706 512+0 records out 00:15:19.706 67108864 bytes (67 MB, 64 MiB) copied, 0.543995 s, 123 MB/s 00:15:19.706 08:51:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:19.706 08:51:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:19.706 08:51:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:19.706 08:51:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:19.706 08:51:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:19.706 08:51:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:19.706 08:51:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:19.706 08:51:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:19.706 
[2024-10-05 08:51:56.171625] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:19.706 08:51:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:19.706 08:51:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:19.706 08:51:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:19.706 08:51:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:19.706 08:51:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:19.965 08:51:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:19.965 08:51:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:19.965 08:51:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:19.965 08:51:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.965 08:51:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.965 [2024-10-05 08:51:56.190216] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:19.965 08:51:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.965 08:51:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:19.965 08:51:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.965 08:51:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.965 08:51:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:19.965 08:51:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.965 08:51:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:15:19.965 08:51:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.965 08:51:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.965 08:51:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.965 08:51:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.965 08:51:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.965 08:51:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.965 08:51:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.965 08:51:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.965 08:51:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.965 08:51:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.965 "name": "raid_bdev1", 00:15:19.965 "uuid": "b30710bf-7b42-4c5a-8850-858c8dcd4b5f", 00:15:19.965 "strip_size_kb": 64, 00:15:19.965 "state": "online", 00:15:19.965 "raid_level": "raid5f", 00:15:19.965 "superblock": false, 00:15:19.965 "num_base_bdevs": 3, 00:15:19.965 "num_base_bdevs_discovered": 2, 00:15:19.965 "num_base_bdevs_operational": 2, 00:15:19.965 "base_bdevs_list": [ 00:15:19.965 { 00:15:19.965 "name": null, 00:15:19.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.965 "is_configured": false, 00:15:19.965 "data_offset": 0, 00:15:19.965 "data_size": 65536 00:15:19.965 }, 00:15:19.965 { 00:15:19.965 "name": "BaseBdev2", 00:15:19.965 "uuid": "2fecbac9-3013-53ee-abf4-2a3e752ae7d9", 00:15:19.965 "is_configured": true, 00:15:19.965 "data_offset": 0, 00:15:19.965 "data_size": 65536 00:15:19.965 }, 00:15:19.965 { 00:15:19.965 "name": "BaseBdev3", 00:15:19.965 "uuid": 
"5a204ea2-6f96-5a6d-b5ef-eac6eeab6364", 00:15:19.965 "is_configured": true, 00:15:19.965 "data_offset": 0, 00:15:19.965 "data_size": 65536 00:15:19.965 } 00:15:19.965 ] 00:15:19.965 }' 00:15:19.965 08:51:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.965 08:51:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.225 08:51:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:20.225 08:51:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.225 08:51:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.225 [2024-10-05 08:51:56.617547] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:20.225 [2024-10-05 08:51:56.633241] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:15:20.225 08:51:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.225 08:51:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:20.225 [2024-10-05 08:51:56.640254] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:21.604 08:51:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:21.604 08:51:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.604 08:51:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:21.604 08:51:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:21.604 08:51:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.604 08:51:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.604 08:51:57 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.604 08:51:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.604 08:51:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.604 08:51:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.604 08:51:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.604 "name": "raid_bdev1", 00:15:21.604 "uuid": "b30710bf-7b42-4c5a-8850-858c8dcd4b5f", 00:15:21.604 "strip_size_kb": 64, 00:15:21.605 "state": "online", 00:15:21.605 "raid_level": "raid5f", 00:15:21.605 "superblock": false, 00:15:21.605 "num_base_bdevs": 3, 00:15:21.605 "num_base_bdevs_discovered": 3, 00:15:21.605 "num_base_bdevs_operational": 3, 00:15:21.605 "process": { 00:15:21.605 "type": "rebuild", 00:15:21.605 "target": "spare", 00:15:21.605 "progress": { 00:15:21.605 "blocks": 20480, 00:15:21.605 "percent": 15 00:15:21.605 } 00:15:21.605 }, 00:15:21.605 "base_bdevs_list": [ 00:15:21.605 { 00:15:21.605 "name": "spare", 00:15:21.605 "uuid": "a5197182-1cde-5365-888d-05f21c7f5aa9", 00:15:21.605 "is_configured": true, 00:15:21.605 "data_offset": 0, 00:15:21.605 "data_size": 65536 00:15:21.605 }, 00:15:21.605 { 00:15:21.605 "name": "BaseBdev2", 00:15:21.605 "uuid": "2fecbac9-3013-53ee-abf4-2a3e752ae7d9", 00:15:21.605 "is_configured": true, 00:15:21.605 "data_offset": 0, 00:15:21.605 "data_size": 65536 00:15:21.605 }, 00:15:21.605 { 00:15:21.605 "name": "BaseBdev3", 00:15:21.605 "uuid": "5a204ea2-6f96-5a6d-b5ef-eac6eeab6364", 00:15:21.605 "is_configured": true, 00:15:21.605 "data_offset": 0, 00:15:21.605 "data_size": 65536 00:15:21.605 } 00:15:21.605 ] 00:15:21.605 }' 00:15:21.605 08:51:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.605 08:51:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:21.605 08:51:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.605 08:51:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:21.605 08:51:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:21.605 08:51:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.605 08:51:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.605 [2024-10-05 08:51:57.799376] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:21.605 [2024-10-05 08:51:57.847498] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:21.605 [2024-10-05 08:51:57.847596] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.605 [2024-10-05 08:51:57.847632] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:21.605 [2024-10-05 08:51:57.847652] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:21.605 08:51:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.605 08:51:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:21.605 08:51:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.605 08:51:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.605 08:51:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:21.605 08:51:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.605 08:51:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:15:21.605 08:51:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.605 08:51:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.605 08:51:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.605 08:51:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.605 08:51:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.605 08:51:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.605 08:51:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.605 08:51:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.605 08:51:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.605 08:51:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.605 "name": "raid_bdev1", 00:15:21.605 "uuid": "b30710bf-7b42-4c5a-8850-858c8dcd4b5f", 00:15:21.605 "strip_size_kb": 64, 00:15:21.605 "state": "online", 00:15:21.605 "raid_level": "raid5f", 00:15:21.605 "superblock": false, 00:15:21.605 "num_base_bdevs": 3, 00:15:21.605 "num_base_bdevs_discovered": 2, 00:15:21.605 "num_base_bdevs_operational": 2, 00:15:21.605 "base_bdevs_list": [ 00:15:21.605 { 00:15:21.605 "name": null, 00:15:21.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.605 "is_configured": false, 00:15:21.605 "data_offset": 0, 00:15:21.605 "data_size": 65536 00:15:21.605 }, 00:15:21.605 { 00:15:21.605 "name": "BaseBdev2", 00:15:21.605 "uuid": "2fecbac9-3013-53ee-abf4-2a3e752ae7d9", 00:15:21.605 "is_configured": true, 00:15:21.605 "data_offset": 0, 00:15:21.605 "data_size": 65536 00:15:21.605 }, 00:15:21.605 { 00:15:21.605 "name": "BaseBdev3", 00:15:21.605 "uuid": 
"5a204ea2-6f96-5a6d-b5ef-eac6eeab6364", 00:15:21.605 "is_configured": true, 00:15:21.605 "data_offset": 0, 00:15:21.605 "data_size": 65536 00:15:21.605 } 00:15:21.605 ] 00:15:21.605 }' 00:15:21.605 08:51:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.605 08:51:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.864 08:51:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:21.864 08:51:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.864 08:51:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:21.864 08:51:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:21.864 08:51:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.124 08:51:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.124 08:51:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.124 08:51:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.124 08:51:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.124 08:51:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.124 08:51:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.124 "name": "raid_bdev1", 00:15:22.124 "uuid": "b30710bf-7b42-4c5a-8850-858c8dcd4b5f", 00:15:22.124 "strip_size_kb": 64, 00:15:22.124 "state": "online", 00:15:22.124 "raid_level": "raid5f", 00:15:22.124 "superblock": false, 00:15:22.124 "num_base_bdevs": 3, 00:15:22.124 "num_base_bdevs_discovered": 2, 00:15:22.124 "num_base_bdevs_operational": 2, 00:15:22.124 "base_bdevs_list": [ 00:15:22.124 { 00:15:22.124 
"name": null, 00:15:22.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.124 "is_configured": false, 00:15:22.124 "data_offset": 0, 00:15:22.124 "data_size": 65536 00:15:22.124 }, 00:15:22.124 { 00:15:22.124 "name": "BaseBdev2", 00:15:22.124 "uuid": "2fecbac9-3013-53ee-abf4-2a3e752ae7d9", 00:15:22.124 "is_configured": true, 00:15:22.124 "data_offset": 0, 00:15:22.124 "data_size": 65536 00:15:22.124 }, 00:15:22.124 { 00:15:22.124 "name": "BaseBdev3", 00:15:22.124 "uuid": "5a204ea2-6f96-5a6d-b5ef-eac6eeab6364", 00:15:22.124 "is_configured": true, 00:15:22.124 "data_offset": 0, 00:15:22.124 "data_size": 65536 00:15:22.124 } 00:15:22.124 ] 00:15:22.124 }' 00:15:22.124 08:51:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:22.124 08:51:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:22.124 08:51:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:22.124 08:51:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:22.124 08:51:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:22.124 08:51:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.124 08:51:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.124 [2024-10-05 08:51:58.477338] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:22.124 [2024-10-05 08:51:58.490830] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:15:22.124 08:51:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.124 08:51:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:22.124 [2024-10-05 08:51:58.498185] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild 
on raid bdev raid_bdev1 00:15:23.061 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:23.061 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.061 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:23.061 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:23.061 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.061 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.061 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.061 08:51:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.061 08:51:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.061 08:51:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.324 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.324 "name": "raid_bdev1", 00:15:23.324 "uuid": "b30710bf-7b42-4c5a-8850-858c8dcd4b5f", 00:15:23.324 "strip_size_kb": 64, 00:15:23.324 "state": "online", 00:15:23.324 "raid_level": "raid5f", 00:15:23.324 "superblock": false, 00:15:23.324 "num_base_bdevs": 3, 00:15:23.324 "num_base_bdevs_discovered": 3, 00:15:23.324 "num_base_bdevs_operational": 3, 00:15:23.324 "process": { 00:15:23.324 "type": "rebuild", 00:15:23.324 "target": "spare", 00:15:23.324 "progress": { 00:15:23.324 "blocks": 20480, 00:15:23.324 "percent": 15 00:15:23.324 } 00:15:23.324 }, 00:15:23.324 "base_bdevs_list": [ 00:15:23.324 { 00:15:23.324 "name": "spare", 00:15:23.324 "uuid": "a5197182-1cde-5365-888d-05f21c7f5aa9", 00:15:23.324 "is_configured": true, 00:15:23.324 "data_offset": 0, 
00:15:23.324 "data_size": 65536 00:15:23.324 }, 00:15:23.324 { 00:15:23.324 "name": "BaseBdev2", 00:15:23.324 "uuid": "2fecbac9-3013-53ee-abf4-2a3e752ae7d9", 00:15:23.324 "is_configured": true, 00:15:23.324 "data_offset": 0, 00:15:23.324 "data_size": 65536 00:15:23.324 }, 00:15:23.324 { 00:15:23.324 "name": "BaseBdev3", 00:15:23.324 "uuid": "5a204ea2-6f96-5a6d-b5ef-eac6eeab6364", 00:15:23.324 "is_configured": true, 00:15:23.324 "data_offset": 0, 00:15:23.324 "data_size": 65536 00:15:23.324 } 00:15:23.324 ] 00:15:23.324 }' 00:15:23.324 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.324 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:23.324 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:23.324 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:23.324 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:23.324 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:23.324 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:23.324 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=551 00:15:23.324 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:23.324 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:23.324 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.324 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:23.324 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:23.324 08:51:59 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.324 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.324 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.324 08:51:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.324 08:51:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.324 08:51:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.324 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.324 "name": "raid_bdev1", 00:15:23.324 "uuid": "b30710bf-7b42-4c5a-8850-858c8dcd4b5f", 00:15:23.324 "strip_size_kb": 64, 00:15:23.324 "state": "online", 00:15:23.324 "raid_level": "raid5f", 00:15:23.324 "superblock": false, 00:15:23.324 "num_base_bdevs": 3, 00:15:23.324 "num_base_bdevs_discovered": 3, 00:15:23.324 "num_base_bdevs_operational": 3, 00:15:23.324 "process": { 00:15:23.324 "type": "rebuild", 00:15:23.324 "target": "spare", 00:15:23.324 "progress": { 00:15:23.324 "blocks": 22528, 00:15:23.324 "percent": 17 00:15:23.324 } 00:15:23.324 }, 00:15:23.324 "base_bdevs_list": [ 00:15:23.324 { 00:15:23.324 "name": "spare", 00:15:23.324 "uuid": "a5197182-1cde-5365-888d-05f21c7f5aa9", 00:15:23.324 "is_configured": true, 00:15:23.324 "data_offset": 0, 00:15:23.324 "data_size": 65536 00:15:23.324 }, 00:15:23.324 { 00:15:23.324 "name": "BaseBdev2", 00:15:23.324 "uuid": "2fecbac9-3013-53ee-abf4-2a3e752ae7d9", 00:15:23.324 "is_configured": true, 00:15:23.324 "data_offset": 0, 00:15:23.324 "data_size": 65536 00:15:23.324 }, 00:15:23.324 { 00:15:23.324 "name": "BaseBdev3", 00:15:23.324 "uuid": "5a204ea2-6f96-5a6d-b5ef-eac6eeab6364", 00:15:23.324 "is_configured": true, 00:15:23.324 "data_offset": 0, 00:15:23.324 "data_size": 65536 00:15:23.324 } 
00:15:23.324 ] 00:15:23.324 }' 00:15:23.324 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.324 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:23.324 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:23.599 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:23.599 08:51:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:24.551 08:52:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:24.551 08:52:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:24.551 08:52:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:24.551 08:52:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:24.551 08:52:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:24.551 08:52:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:24.551 08:52:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.551 08:52:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.551 08:52:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.552 08:52:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.552 08:52:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.552 08:52:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.552 "name": "raid_bdev1", 00:15:24.552 "uuid": "b30710bf-7b42-4c5a-8850-858c8dcd4b5f", 00:15:24.552 
"strip_size_kb": 64, 00:15:24.552 "state": "online", 00:15:24.552 "raid_level": "raid5f", 00:15:24.552 "superblock": false, 00:15:24.552 "num_base_bdevs": 3, 00:15:24.552 "num_base_bdevs_discovered": 3, 00:15:24.552 "num_base_bdevs_operational": 3, 00:15:24.552 "process": { 00:15:24.552 "type": "rebuild", 00:15:24.552 "target": "spare", 00:15:24.552 "progress": { 00:15:24.552 "blocks": 47104, 00:15:24.552 "percent": 35 00:15:24.552 } 00:15:24.552 }, 00:15:24.552 "base_bdevs_list": [ 00:15:24.552 { 00:15:24.552 "name": "spare", 00:15:24.552 "uuid": "a5197182-1cde-5365-888d-05f21c7f5aa9", 00:15:24.552 "is_configured": true, 00:15:24.552 "data_offset": 0, 00:15:24.552 "data_size": 65536 00:15:24.552 }, 00:15:24.552 { 00:15:24.552 "name": "BaseBdev2", 00:15:24.552 "uuid": "2fecbac9-3013-53ee-abf4-2a3e752ae7d9", 00:15:24.552 "is_configured": true, 00:15:24.552 "data_offset": 0, 00:15:24.552 "data_size": 65536 00:15:24.552 }, 00:15:24.552 { 00:15:24.552 "name": "BaseBdev3", 00:15:24.552 "uuid": "5a204ea2-6f96-5a6d-b5ef-eac6eeab6364", 00:15:24.552 "is_configured": true, 00:15:24.552 "data_offset": 0, 00:15:24.552 "data_size": 65536 00:15:24.552 } 00:15:24.552 ] 00:15:24.552 }' 00:15:24.552 08:52:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.552 08:52:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:24.552 08:52:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.552 08:52:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:24.552 08:52:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:25.931 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:25.931 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:25.931 08:52:01 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.931 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:25.931 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:25.931 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.931 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.931 08:52:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.931 08:52:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.931 08:52:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.931 08:52:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.931 08:52:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.931 "name": "raid_bdev1", 00:15:25.931 "uuid": "b30710bf-7b42-4c5a-8850-858c8dcd4b5f", 00:15:25.931 "strip_size_kb": 64, 00:15:25.931 "state": "online", 00:15:25.931 "raid_level": "raid5f", 00:15:25.931 "superblock": false, 00:15:25.931 "num_base_bdevs": 3, 00:15:25.931 "num_base_bdevs_discovered": 3, 00:15:25.931 "num_base_bdevs_operational": 3, 00:15:25.931 "process": { 00:15:25.931 "type": "rebuild", 00:15:25.931 "target": "spare", 00:15:25.931 "progress": { 00:15:25.931 "blocks": 69632, 00:15:25.931 "percent": 53 00:15:25.931 } 00:15:25.931 }, 00:15:25.931 "base_bdevs_list": [ 00:15:25.931 { 00:15:25.931 "name": "spare", 00:15:25.931 "uuid": "a5197182-1cde-5365-888d-05f21c7f5aa9", 00:15:25.931 "is_configured": true, 00:15:25.931 "data_offset": 0, 00:15:25.931 "data_size": 65536 00:15:25.931 }, 00:15:25.931 { 00:15:25.931 "name": "BaseBdev2", 00:15:25.931 "uuid": "2fecbac9-3013-53ee-abf4-2a3e752ae7d9", 00:15:25.931 
"is_configured": true, 00:15:25.931 "data_offset": 0, 00:15:25.931 "data_size": 65536 00:15:25.931 }, 00:15:25.931 { 00:15:25.931 "name": "BaseBdev3", 00:15:25.931 "uuid": "5a204ea2-6f96-5a6d-b5ef-eac6eeab6364", 00:15:25.931 "is_configured": true, 00:15:25.931 "data_offset": 0, 00:15:25.931 "data_size": 65536 00:15:25.931 } 00:15:25.931 ] 00:15:25.931 }' 00:15:25.931 08:52:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.931 08:52:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:25.931 08:52:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.931 08:52:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:25.931 08:52:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:26.867 08:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:26.867 08:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:26.867 08:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.867 08:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:26.867 08:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:26.867 08:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.867 08:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.867 08:52:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.867 08:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.867 08:52:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:15:26.867 08:52:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.867 08:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.867 "name": "raid_bdev1", 00:15:26.867 "uuid": "b30710bf-7b42-4c5a-8850-858c8dcd4b5f", 00:15:26.867 "strip_size_kb": 64, 00:15:26.867 "state": "online", 00:15:26.867 "raid_level": "raid5f", 00:15:26.867 "superblock": false, 00:15:26.867 "num_base_bdevs": 3, 00:15:26.867 "num_base_bdevs_discovered": 3, 00:15:26.867 "num_base_bdevs_operational": 3, 00:15:26.867 "process": { 00:15:26.867 "type": "rebuild", 00:15:26.867 "target": "spare", 00:15:26.867 "progress": { 00:15:26.867 "blocks": 94208, 00:15:26.867 "percent": 71 00:15:26.867 } 00:15:26.867 }, 00:15:26.867 "base_bdevs_list": [ 00:15:26.867 { 00:15:26.867 "name": "spare", 00:15:26.867 "uuid": "a5197182-1cde-5365-888d-05f21c7f5aa9", 00:15:26.867 "is_configured": true, 00:15:26.867 "data_offset": 0, 00:15:26.867 "data_size": 65536 00:15:26.867 }, 00:15:26.867 { 00:15:26.867 "name": "BaseBdev2", 00:15:26.867 "uuid": "2fecbac9-3013-53ee-abf4-2a3e752ae7d9", 00:15:26.867 "is_configured": true, 00:15:26.867 "data_offset": 0, 00:15:26.867 "data_size": 65536 00:15:26.867 }, 00:15:26.867 { 00:15:26.867 "name": "BaseBdev3", 00:15:26.867 "uuid": "5a204ea2-6f96-5a6d-b5ef-eac6eeab6364", 00:15:26.867 "is_configured": true, 00:15:26.867 "data_offset": 0, 00:15:26.867 "data_size": 65536 00:15:26.867 } 00:15:26.867 ] 00:15:26.867 }' 00:15:26.867 08:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.867 08:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:26.867 08:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.867 08:52:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:26.868 08:52:03 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:27.806 08:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:27.806 08:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:27.806 08:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.806 08:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:27.806 08:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:27.806 08:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.806 08:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.806 08:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.806 08:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.806 08:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.065 08:52:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.065 08:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:28.065 "name": "raid_bdev1", 00:15:28.065 "uuid": "b30710bf-7b42-4c5a-8850-858c8dcd4b5f", 00:15:28.065 "strip_size_kb": 64, 00:15:28.065 "state": "online", 00:15:28.065 "raid_level": "raid5f", 00:15:28.065 "superblock": false, 00:15:28.065 "num_base_bdevs": 3, 00:15:28.065 "num_base_bdevs_discovered": 3, 00:15:28.065 "num_base_bdevs_operational": 3, 00:15:28.065 "process": { 00:15:28.065 "type": "rebuild", 00:15:28.065 "target": "spare", 00:15:28.065 "progress": { 00:15:28.065 "blocks": 116736, 00:15:28.065 "percent": 89 00:15:28.065 } 00:15:28.065 }, 00:15:28.065 "base_bdevs_list": [ 00:15:28.065 { 
00:15:28.065 "name": "spare", 00:15:28.065 "uuid": "a5197182-1cde-5365-888d-05f21c7f5aa9", 00:15:28.065 "is_configured": true, 00:15:28.065 "data_offset": 0, 00:15:28.065 "data_size": 65536 00:15:28.065 }, 00:15:28.065 { 00:15:28.065 "name": "BaseBdev2", 00:15:28.065 "uuid": "2fecbac9-3013-53ee-abf4-2a3e752ae7d9", 00:15:28.065 "is_configured": true, 00:15:28.065 "data_offset": 0, 00:15:28.065 "data_size": 65536 00:15:28.065 }, 00:15:28.065 { 00:15:28.065 "name": "BaseBdev3", 00:15:28.065 "uuid": "5a204ea2-6f96-5a6d-b5ef-eac6eeab6364", 00:15:28.065 "is_configured": true, 00:15:28.065 "data_offset": 0, 00:15:28.065 "data_size": 65536 00:15:28.065 } 00:15:28.065 ] 00:15:28.065 }' 00:15:28.065 08:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:28.065 08:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:28.065 08:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:28.065 08:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:28.065 08:52:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:28.634 [2024-10-05 08:52:04.932608] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:28.634 [2024-10-05 08:52:04.932732] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:28.634 [2024-10-05 08:52:04.932792] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:29.203 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:29.203 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:29.203 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.203 08:52:05 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:29.203 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:29.203 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.203 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.203 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.203 08:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.203 08:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.203 08:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.203 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.203 "name": "raid_bdev1", 00:15:29.203 "uuid": "b30710bf-7b42-4c5a-8850-858c8dcd4b5f", 00:15:29.203 "strip_size_kb": 64, 00:15:29.203 "state": "online", 00:15:29.203 "raid_level": "raid5f", 00:15:29.203 "superblock": false, 00:15:29.203 "num_base_bdevs": 3, 00:15:29.203 "num_base_bdevs_discovered": 3, 00:15:29.203 "num_base_bdevs_operational": 3, 00:15:29.203 "base_bdevs_list": [ 00:15:29.203 { 00:15:29.203 "name": "spare", 00:15:29.203 "uuid": "a5197182-1cde-5365-888d-05f21c7f5aa9", 00:15:29.203 "is_configured": true, 00:15:29.203 "data_offset": 0, 00:15:29.203 "data_size": 65536 00:15:29.203 }, 00:15:29.203 { 00:15:29.203 "name": "BaseBdev2", 00:15:29.203 "uuid": "2fecbac9-3013-53ee-abf4-2a3e752ae7d9", 00:15:29.203 "is_configured": true, 00:15:29.203 "data_offset": 0, 00:15:29.203 "data_size": 65536 00:15:29.203 }, 00:15:29.203 { 00:15:29.203 "name": "BaseBdev3", 00:15:29.204 "uuid": "5a204ea2-6f96-5a6d-b5ef-eac6eeab6364", 00:15:29.204 "is_configured": true, 00:15:29.204 "data_offset": 0, 00:15:29.204 "data_size": 65536 00:15:29.204 } 
00:15:29.204 ] 00:15:29.204 }' 00:15:29.204 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.204 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:29.204 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.204 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:29.204 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:29.204 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:29.204 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.204 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:29.204 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:29.204 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.204 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.204 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.204 08:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.204 08:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.204 08:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.204 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.204 "name": "raid_bdev1", 00:15:29.204 "uuid": "b30710bf-7b42-4c5a-8850-858c8dcd4b5f", 00:15:29.204 "strip_size_kb": 64, 00:15:29.204 "state": "online", 00:15:29.204 "raid_level": "raid5f", 00:15:29.204 "superblock": false, 
00:15:29.204 "num_base_bdevs": 3, 00:15:29.204 "num_base_bdevs_discovered": 3, 00:15:29.204 "num_base_bdevs_operational": 3, 00:15:29.204 "base_bdevs_list": [ 00:15:29.204 { 00:15:29.204 "name": "spare", 00:15:29.204 "uuid": "a5197182-1cde-5365-888d-05f21c7f5aa9", 00:15:29.204 "is_configured": true, 00:15:29.204 "data_offset": 0, 00:15:29.204 "data_size": 65536 00:15:29.204 }, 00:15:29.204 { 00:15:29.204 "name": "BaseBdev2", 00:15:29.204 "uuid": "2fecbac9-3013-53ee-abf4-2a3e752ae7d9", 00:15:29.204 "is_configured": true, 00:15:29.204 "data_offset": 0, 00:15:29.204 "data_size": 65536 00:15:29.204 }, 00:15:29.204 { 00:15:29.204 "name": "BaseBdev3", 00:15:29.204 "uuid": "5a204ea2-6f96-5a6d-b5ef-eac6eeab6364", 00:15:29.204 "is_configured": true, 00:15:29.204 "data_offset": 0, 00:15:29.204 "data_size": 65536 00:15:29.204 } 00:15:29.204 ] 00:15:29.204 }' 00:15:29.204 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.204 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:29.204 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.464 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:29.464 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:29.464 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:29.464 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:29.464 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:29.464 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.464 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:29.464 
08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.464 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.464 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.464 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.464 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.464 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.464 08:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.464 08:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.464 08:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.464 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.464 "name": "raid_bdev1", 00:15:29.464 "uuid": "b30710bf-7b42-4c5a-8850-858c8dcd4b5f", 00:15:29.464 "strip_size_kb": 64, 00:15:29.464 "state": "online", 00:15:29.464 "raid_level": "raid5f", 00:15:29.464 "superblock": false, 00:15:29.464 "num_base_bdevs": 3, 00:15:29.464 "num_base_bdevs_discovered": 3, 00:15:29.464 "num_base_bdevs_operational": 3, 00:15:29.464 "base_bdevs_list": [ 00:15:29.464 { 00:15:29.464 "name": "spare", 00:15:29.464 "uuid": "a5197182-1cde-5365-888d-05f21c7f5aa9", 00:15:29.464 "is_configured": true, 00:15:29.464 "data_offset": 0, 00:15:29.464 "data_size": 65536 00:15:29.464 }, 00:15:29.464 { 00:15:29.464 "name": "BaseBdev2", 00:15:29.464 "uuid": "2fecbac9-3013-53ee-abf4-2a3e752ae7d9", 00:15:29.464 "is_configured": true, 00:15:29.464 "data_offset": 0, 00:15:29.464 "data_size": 65536 00:15:29.464 }, 00:15:29.464 { 00:15:29.464 "name": "BaseBdev3", 00:15:29.464 "uuid": "5a204ea2-6f96-5a6d-b5ef-eac6eeab6364", 
00:15:29.464 "is_configured": true, 00:15:29.464 "data_offset": 0, 00:15:29.464 "data_size": 65536 00:15:29.464 } 00:15:29.464 ] 00:15:29.464 }' 00:15:29.464 08:52:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.464 08:52:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.724 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:29.724 08:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.724 08:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.724 [2024-10-05 08:52:06.137777] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:29.724 [2024-10-05 08:52:06.137859] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:29.724 [2024-10-05 08:52:06.137952] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:29.724 [2024-10-05 08:52:06.138054] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:29.724 [2024-10-05 08:52:06.138093] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:29.724 08:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.724 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.725 08:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.725 08:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.725 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:29.725 08:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.725 08:52:06 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:29.725 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:29.725 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:29.725 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:29.985 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:29.985 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:29.985 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:29.985 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:29.985 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:29.985 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:29.985 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:29.985 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:29.985 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:29.985 /dev/nbd0 00:15:29.985 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:29.985 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:29.985 08:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:29.985 08:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:29.985 08:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:29.985 08:52:06 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:29.985 08:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:29.985 08:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:29.985 08:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:29.985 08:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:29.985 08:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:29.985 1+0 records in 00:15:29.985 1+0 records out 00:15:29.985 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361495 s, 11.3 MB/s 00:15:29.985 08:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.985 08:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:29.985 08:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.985 08:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:29.985 08:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:29.985 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:29.985 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:29.985 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:30.245 /dev/nbd1 00:15:30.245 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:30.245 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:30.245 08:52:06 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:30.245 08:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:30.245 08:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:30.245 08:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:30.245 08:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:30.245 08:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:30.245 08:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:30.245 08:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:30.245 08:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:30.245 1+0 records in 00:15:30.245 1+0 records out 00:15:30.245 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000520573 s, 7.9 MB/s 00:15:30.245 08:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:30.245 08:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:30.245 08:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:30.245 08:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:30.245 08:52:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:30.245 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:30.245 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:30.245 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- 
# cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:30.505 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:30.505 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:30.505 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:30.505 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:30.505 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:30.505 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:30.505 08:52:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:30.764 08:52:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:30.764 08:52:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:30.764 08:52:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:30.764 08:52:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:30.764 08:52:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:30.764 08:52:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:30.764 08:52:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:30.764 08:52:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:30.764 08:52:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:30.764 08:52:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:31.024 08:52:07 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:31.024 08:52:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:31.024 08:52:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:31.024 08:52:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:31.024 08:52:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:31.024 08:52:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:31.024 08:52:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:31.024 08:52:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:31.024 08:52:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:31.024 08:52:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 78665 00:15:31.024 08:52:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 78665 ']' 00:15:31.024 08:52:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 78665 00:15:31.024 08:52:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:15:31.024 08:52:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:31.024 08:52:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78665 00:15:31.024 killing process with pid 78665 00:15:31.024 Received shutdown signal, test time was about 60.000000 seconds 00:15:31.024 00:15:31.024 Latency(us) 00:15:31.024 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:31.024 =================================================================================================================== 00:15:31.024 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:31.024 08:52:07 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:31.024 08:52:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:31.024 08:52:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78665' 00:15:31.024 08:52:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 78665 00:15:31.024 [2024-10-05 08:52:07.340181] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:31.024 08:52:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 78665 00:15:31.284 [2024-10-05 08:52:07.703653] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:32.664 08:52:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:32.664 00:15:32.664 real 0m15.390s 00:15:32.664 user 0m18.716s 00:15:32.664 sys 0m2.313s 00:15:32.664 ************************************ 00:15:32.664 END TEST raid5f_rebuild_test 00:15:32.664 ************************************ 00:15:32.664 08:52:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:32.664 08:52:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.664 08:52:08 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:15:32.664 08:52:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:32.664 08:52:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:32.664 08:52:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:32.664 ************************************ 00:15:32.664 START TEST raid5f_rebuild_test_sb 00:15:32.664 ************************************ 00:15:32.664 08:52:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 true false true 00:15:32.664 08:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # 
local raid_level=raid5f 00:15:32.664 08:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:32.664 08:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:32.664 08:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:32.664 08:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:32.664 08:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:32.664 08:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:32.664 08:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:32.664 08:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:32.664 08:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:32.664 08:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:32.664 08:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:32.664 08:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:32.664 08:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:32.664 08:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:32.664 08:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:32.664 08:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:32.664 08:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:32.664 08:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:32.664 08:52:08 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@576 -- # local strip_size 00:15:32.664 08:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:32.664 08:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:32.664 08:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:32.664 08:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:32.664 08:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:32.664 08:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:32.664 08:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:32.664 08:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:32.664 08:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:32.664 08:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=79015 00:15:32.664 08:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 79015 00:15:32.664 08:52:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:32.664 08:52:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 79015 ']' 00:15:32.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:32.664 08:52:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.664 08:52:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:32.664 08:52:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.664 08:52:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:32.664 08:52:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.664 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:32.664 Zero copy mechanism will not be used. 00:15:32.664 [2024-10-05 08:52:09.058418] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:15:32.664 [2024-10-05 08:52:09.058553] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79015 ] 00:15:32.924 [2024-10-05 08:52:09.223178] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.183 [2024-10-05 08:52:09.408903] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.183 [2024-10-05 08:52:09.599468] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:33.183 [2024-10-05 08:52:09.599499] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:33.443 08:52:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:33.443 08:52:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:15:33.443 08:52:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:33.443 08:52:09 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:33.443 08:52:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.443 08:52:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.443 BaseBdev1_malloc 00:15:33.443 08:52:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.443 08:52:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:33.443 08:52:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.443 08:52:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.443 [2024-10-05 08:52:09.904642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:33.443 [2024-10-05 08:52:09.904705] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.443 [2024-10-05 08:52:09.904730] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:33.443 [2024-10-05 08:52:09.904743] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.443 [2024-10-05 08:52:09.906686] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.443 [2024-10-05 08:52:09.906767] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:33.443 BaseBdev1 00:15:33.443 08:52:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.443 08:52:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:33.443 08:52:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:33.443 08:52:09 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.443 08:52:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.703 BaseBdev2_malloc 00:15:33.703 08:52:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.703 08:52:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:33.703 08:52:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.703 08:52:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.703 [2024-10-05 08:52:09.985958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:33.703 [2024-10-05 08:52:09.986028] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.703 [2024-10-05 08:52:09.986049] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:33.703 [2024-10-05 08:52:09.986062] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.703 [2024-10-05 08:52:09.987951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.703 [2024-10-05 08:52:09.988000] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:33.703 BaseBdev2 00:15:33.703 08:52:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.703 08:52:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:33.703 08:52:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:33.703 08:52:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.703 08:52:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.703 BaseBdev3_malloc 
00:15:33.703 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.703 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:33.703 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.703 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.704 [2024-10-05 08:52:10.034592] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:33.704 [2024-10-05 08:52:10.034641] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.704 [2024-10-05 08:52:10.034662] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:33.704 [2024-10-05 08:52:10.034672] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.704 [2024-10-05 08:52:10.036614] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.704 [2024-10-05 08:52:10.036653] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:33.704 BaseBdev3 00:15:33.704 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.704 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:33.704 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.704 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.704 spare_malloc 00:15:33.704 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.704 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:33.704 08:52:10 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.704 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.704 spare_delay 00:15:33.704 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.704 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:33.704 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.704 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.704 [2024-10-05 08:52:10.100299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:33.704 [2024-10-05 08:52:10.100344] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.704 [2024-10-05 08:52:10.100360] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:33.704 [2024-10-05 08:52:10.100369] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.704 [2024-10-05 08:52:10.102344] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.704 [2024-10-05 08:52:10.102385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:33.704 spare 00:15:33.704 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.704 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:33.704 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.704 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.704 [2024-10-05 08:52:10.112360] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:15:33.704 [2024-10-05 08:52:10.114089] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:33.704 [2024-10-05 08:52:10.114149] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:33.704 [2024-10-05 08:52:10.114313] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:33.704 [2024-10-05 08:52:10.114324] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:33.704 [2024-10-05 08:52:10.114554] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:33.704 [2024-10-05 08:52:10.119597] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:33.704 [2024-10-05 08:52:10.119659] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:33.704 [2024-10-05 08:52:10.119849] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:33.704 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.704 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:33.704 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:33.704 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.704 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.704 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.704 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:33.704 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.704 08:52:10 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.704 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.704 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.704 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.704 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.704 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.704 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.704 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.704 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.704 "name": "raid_bdev1", 00:15:33.704 "uuid": "771086a8-a590-448c-a11b-94beb415eebb", 00:15:33.704 "strip_size_kb": 64, 00:15:33.704 "state": "online", 00:15:33.704 "raid_level": "raid5f", 00:15:33.704 "superblock": true, 00:15:33.704 "num_base_bdevs": 3, 00:15:33.704 "num_base_bdevs_discovered": 3, 00:15:33.704 "num_base_bdevs_operational": 3, 00:15:33.704 "base_bdevs_list": [ 00:15:33.704 { 00:15:33.704 "name": "BaseBdev1", 00:15:33.704 "uuid": "1704ccf7-5ffa-5148-9192-9a4852740174", 00:15:33.704 "is_configured": true, 00:15:33.704 "data_offset": 2048, 00:15:33.704 "data_size": 63488 00:15:33.704 }, 00:15:33.704 { 00:15:33.704 "name": "BaseBdev2", 00:15:33.704 "uuid": "a2e3ab5a-074b-5ef0-bff8-fc3224014927", 00:15:33.704 "is_configured": true, 00:15:33.704 "data_offset": 2048, 00:15:33.704 "data_size": 63488 00:15:33.704 }, 00:15:33.704 { 00:15:33.704 "name": "BaseBdev3", 00:15:33.704 "uuid": "c40bd1d3-b0bd-5eec-853f-87c00417e426", 00:15:33.704 "is_configured": true, 00:15:33.704 "data_offset": 2048, 00:15:33.704 
"data_size": 63488 00:15:33.704 } 00:15:33.704 ] 00:15:33.704 }' 00:15:33.704 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.964 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.223 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:34.223 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:34.223 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.223 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.223 [2024-10-05 08:52:10.569224] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:34.223 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.223 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:15:34.223 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.223 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.223 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.223 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:34.223 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.223 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:34.223 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:34.223 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:34.223 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local 
write_unit_size 00:15:34.223 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:34.223 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:34.223 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:34.223 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:34.223 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:34.223 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:34.223 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:34.223 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:34.223 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:34.223 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:34.483 [2024-10-05 08:52:10.848588] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:34.483 /dev/nbd0 00:15:34.483 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:34.483 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:34.483 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:34.483 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:34.483 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:34.483 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:34.483 08:52:10 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:34.483 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:34.483 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:34.483 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:34.483 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:34.483 1+0 records in 00:15:34.483 1+0 records out 00:15:34.483 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000607063 s, 6.7 MB/s 00:15:34.483 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.483 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:34.483 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.483 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:34.483 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:34.483 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:34.483 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:34.483 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:34.483 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:34.483 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:34.483 08:52:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 
00:15:35.053 496+0 records in 00:15:35.053 496+0 records out 00:15:35.053 65011712 bytes (65 MB, 62 MiB) copied, 0.456531 s, 142 MB/s 00:15:35.053 08:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:35.053 08:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:35.053 08:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:35.053 08:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:35.053 08:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:35.053 08:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:35.053 08:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:35.313 08:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:35.313 [2024-10-05 08:52:11.599190] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:35.313 08:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:35.313 08:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:35.313 08:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:35.313 08:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:35.313 08:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:35.313 08:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:35.313 08:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:35.313 08:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd 
bdev_raid_remove_base_bdev BaseBdev1 00:15:35.313 08:52:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.313 08:52:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.313 [2024-10-05 08:52:11.617860] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:35.313 08:52:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.313 08:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:35.313 08:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.313 08:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.313 08:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.313 08:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.313 08:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:35.313 08:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.313 08:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.313 08:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.313 08:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.313 08:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.313 08:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.313 08:52:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.313 08:52:11 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:35.313 08:52:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.313 08:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.313 "name": "raid_bdev1", 00:15:35.313 "uuid": "771086a8-a590-448c-a11b-94beb415eebb", 00:15:35.313 "strip_size_kb": 64, 00:15:35.313 "state": "online", 00:15:35.313 "raid_level": "raid5f", 00:15:35.313 "superblock": true, 00:15:35.313 "num_base_bdevs": 3, 00:15:35.313 "num_base_bdevs_discovered": 2, 00:15:35.313 "num_base_bdevs_operational": 2, 00:15:35.313 "base_bdevs_list": [ 00:15:35.313 { 00:15:35.313 "name": null, 00:15:35.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.313 "is_configured": false, 00:15:35.313 "data_offset": 0, 00:15:35.313 "data_size": 63488 00:15:35.313 }, 00:15:35.313 { 00:15:35.313 "name": "BaseBdev2", 00:15:35.313 "uuid": "a2e3ab5a-074b-5ef0-bff8-fc3224014927", 00:15:35.313 "is_configured": true, 00:15:35.313 "data_offset": 2048, 00:15:35.313 "data_size": 63488 00:15:35.313 }, 00:15:35.313 { 00:15:35.313 "name": "BaseBdev3", 00:15:35.313 "uuid": "c40bd1d3-b0bd-5eec-853f-87c00417e426", 00:15:35.313 "is_configured": true, 00:15:35.313 "data_offset": 2048, 00:15:35.313 "data_size": 63488 00:15:35.313 } 00:15:35.313 ] 00:15:35.313 }' 00:15:35.313 08:52:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.313 08:52:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.883 08:52:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:35.883 08:52:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.883 08:52:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.883 [2024-10-05 08:52:12.073109] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is 
claimed 00:15:35.883 [2024-10-05 08:52:12.088953] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:15:35.883 08:52:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.883 08:52:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:35.883 [2024-10-05 08:52:12.096042] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:36.820 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:36.820 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.820 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:36.820 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:36.820 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.820 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.820 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.820 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.820 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.820 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.820 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.820 "name": "raid_bdev1", 00:15:36.820 "uuid": "771086a8-a590-448c-a11b-94beb415eebb", 00:15:36.820 "strip_size_kb": 64, 00:15:36.820 "state": "online", 00:15:36.820 "raid_level": "raid5f", 00:15:36.820 "superblock": true, 00:15:36.820 "num_base_bdevs": 3, 00:15:36.820 
"num_base_bdevs_discovered": 3, 00:15:36.820 "num_base_bdevs_operational": 3, 00:15:36.820 "process": { 00:15:36.820 "type": "rebuild", 00:15:36.820 "target": "spare", 00:15:36.820 "progress": { 00:15:36.820 "blocks": 20480, 00:15:36.820 "percent": 16 00:15:36.820 } 00:15:36.820 }, 00:15:36.820 "base_bdevs_list": [ 00:15:36.820 { 00:15:36.820 "name": "spare", 00:15:36.820 "uuid": "87d524aa-32b9-5556-b28e-434c6c205202", 00:15:36.820 "is_configured": true, 00:15:36.820 "data_offset": 2048, 00:15:36.820 "data_size": 63488 00:15:36.820 }, 00:15:36.820 { 00:15:36.820 "name": "BaseBdev2", 00:15:36.820 "uuid": "a2e3ab5a-074b-5ef0-bff8-fc3224014927", 00:15:36.820 "is_configured": true, 00:15:36.820 "data_offset": 2048, 00:15:36.821 "data_size": 63488 00:15:36.821 }, 00:15:36.821 { 00:15:36.821 "name": "BaseBdev3", 00:15:36.821 "uuid": "c40bd1d3-b0bd-5eec-853f-87c00417e426", 00:15:36.821 "is_configured": true, 00:15:36.821 "data_offset": 2048, 00:15:36.821 "data_size": 63488 00:15:36.821 } 00:15:36.821 ] 00:15:36.821 }' 00:15:36.821 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.821 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:36.821 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.821 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:36.821 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:36.821 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.821 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.821 [2024-10-05 08:52:13.210651] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:37.080 [2024-10-05 08:52:13.303430] 
bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:37.080 [2024-10-05 08:52:13.303485] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:37.080 [2024-10-05 08:52:13.303518] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:37.080 [2024-10-05 08:52:13.303525] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:37.080 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.080 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:37.080 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.080 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.080 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.080 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.080 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:37.080 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.080 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.080 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.080 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.080 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.080 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.080 08:52:13 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.080 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.080 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.080 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.080 "name": "raid_bdev1", 00:15:37.080 "uuid": "771086a8-a590-448c-a11b-94beb415eebb", 00:15:37.080 "strip_size_kb": 64, 00:15:37.080 "state": "online", 00:15:37.080 "raid_level": "raid5f", 00:15:37.080 "superblock": true, 00:15:37.080 "num_base_bdevs": 3, 00:15:37.080 "num_base_bdevs_discovered": 2, 00:15:37.080 "num_base_bdevs_operational": 2, 00:15:37.080 "base_bdevs_list": [ 00:15:37.080 { 00:15:37.080 "name": null, 00:15:37.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.080 "is_configured": false, 00:15:37.080 "data_offset": 0, 00:15:37.080 "data_size": 63488 00:15:37.080 }, 00:15:37.080 { 00:15:37.080 "name": "BaseBdev2", 00:15:37.080 "uuid": "a2e3ab5a-074b-5ef0-bff8-fc3224014927", 00:15:37.080 "is_configured": true, 00:15:37.080 "data_offset": 2048, 00:15:37.080 "data_size": 63488 00:15:37.080 }, 00:15:37.080 { 00:15:37.080 "name": "BaseBdev3", 00:15:37.080 "uuid": "c40bd1d3-b0bd-5eec-853f-87c00417e426", 00:15:37.080 "is_configured": true, 00:15:37.080 "data_offset": 2048, 00:15:37.080 "data_size": 63488 00:15:37.080 } 00:15:37.080 ] 00:15:37.080 }' 00:15:37.080 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.080 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.340 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:37.340 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.340 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:37.340 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:37.340 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.340 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.340 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.340 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.340 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.340 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.340 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.340 "name": "raid_bdev1", 00:15:37.340 "uuid": "771086a8-a590-448c-a11b-94beb415eebb", 00:15:37.340 "strip_size_kb": 64, 00:15:37.340 "state": "online", 00:15:37.340 "raid_level": "raid5f", 00:15:37.340 "superblock": true, 00:15:37.340 "num_base_bdevs": 3, 00:15:37.340 "num_base_bdevs_discovered": 2, 00:15:37.340 "num_base_bdevs_operational": 2, 00:15:37.340 "base_bdevs_list": [ 00:15:37.340 { 00:15:37.340 "name": null, 00:15:37.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.340 "is_configured": false, 00:15:37.340 "data_offset": 0, 00:15:37.340 "data_size": 63488 00:15:37.340 }, 00:15:37.340 { 00:15:37.340 "name": "BaseBdev2", 00:15:37.340 "uuid": "a2e3ab5a-074b-5ef0-bff8-fc3224014927", 00:15:37.340 "is_configured": true, 00:15:37.340 "data_offset": 2048, 00:15:37.340 "data_size": 63488 00:15:37.340 }, 00:15:37.340 { 00:15:37.340 "name": "BaseBdev3", 00:15:37.340 "uuid": "c40bd1d3-b0bd-5eec-853f-87c00417e426", 00:15:37.340 "is_configured": true, 00:15:37.340 "data_offset": 2048, 00:15:37.340 "data_size": 63488 00:15:37.340 } 00:15:37.340 
] 00:15:37.340 }' 00:15:37.340 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.599 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:37.599 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.599 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:37.599 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:37.599 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.599 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.599 [2024-10-05 08:52:13.906149] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:37.599 [2024-10-05 08:52:13.921014] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:15:37.599 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.599 08:52:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:37.599 [2024-10-05 08:52:13.928668] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:38.549 08:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:38.549 08:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:38.549 08:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:38.549 08:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:38.549 08:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:38.549 08:52:14 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.549 08:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.549 08:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.549 08:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.549 08:52:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.549 08:52:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.549 "name": "raid_bdev1", 00:15:38.549 "uuid": "771086a8-a590-448c-a11b-94beb415eebb", 00:15:38.549 "strip_size_kb": 64, 00:15:38.549 "state": "online", 00:15:38.549 "raid_level": "raid5f", 00:15:38.549 "superblock": true, 00:15:38.549 "num_base_bdevs": 3, 00:15:38.549 "num_base_bdevs_discovered": 3, 00:15:38.549 "num_base_bdevs_operational": 3, 00:15:38.549 "process": { 00:15:38.549 "type": "rebuild", 00:15:38.549 "target": "spare", 00:15:38.549 "progress": { 00:15:38.549 "blocks": 20480, 00:15:38.549 "percent": 16 00:15:38.549 } 00:15:38.549 }, 00:15:38.549 "base_bdevs_list": [ 00:15:38.549 { 00:15:38.549 "name": "spare", 00:15:38.549 "uuid": "87d524aa-32b9-5556-b28e-434c6c205202", 00:15:38.549 "is_configured": true, 00:15:38.549 "data_offset": 2048, 00:15:38.549 "data_size": 63488 00:15:38.549 }, 00:15:38.549 { 00:15:38.549 "name": "BaseBdev2", 00:15:38.549 "uuid": "a2e3ab5a-074b-5ef0-bff8-fc3224014927", 00:15:38.549 "is_configured": true, 00:15:38.549 "data_offset": 2048, 00:15:38.549 "data_size": 63488 00:15:38.549 }, 00:15:38.549 { 00:15:38.549 "name": "BaseBdev3", 00:15:38.549 "uuid": "c40bd1d3-b0bd-5eec-853f-87c00417e426", 00:15:38.549 "is_configured": true, 00:15:38.549 "data_offset": 2048, 00:15:38.549 "data_size": 63488 00:15:38.549 } 00:15:38.549 ] 00:15:38.549 }' 00:15:38.549 08:52:14 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.856 08:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:38.856 08:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.856 08:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:38.856 08:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:38.856 08:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:38.856 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:38.856 08:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:38.856 08:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:38.856 08:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=567 00:15:38.856 08:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:38.856 08:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:38.856 08:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:38.856 08:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:38.856 08:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:38.856 08:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:38.856 08:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.856 08:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:38.856 08:52:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.856 08:52:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.856 08:52:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.856 08:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.856 "name": "raid_bdev1", 00:15:38.856 "uuid": "771086a8-a590-448c-a11b-94beb415eebb", 00:15:38.856 "strip_size_kb": 64, 00:15:38.856 "state": "online", 00:15:38.856 "raid_level": "raid5f", 00:15:38.856 "superblock": true, 00:15:38.856 "num_base_bdevs": 3, 00:15:38.856 "num_base_bdevs_discovered": 3, 00:15:38.856 "num_base_bdevs_operational": 3, 00:15:38.856 "process": { 00:15:38.856 "type": "rebuild", 00:15:38.856 "target": "spare", 00:15:38.856 "progress": { 00:15:38.856 "blocks": 22528, 00:15:38.856 "percent": 17 00:15:38.856 } 00:15:38.856 }, 00:15:38.856 "base_bdevs_list": [ 00:15:38.856 { 00:15:38.856 "name": "spare", 00:15:38.856 "uuid": "87d524aa-32b9-5556-b28e-434c6c205202", 00:15:38.856 "is_configured": true, 00:15:38.856 "data_offset": 2048, 00:15:38.856 "data_size": 63488 00:15:38.856 }, 00:15:38.856 { 00:15:38.856 "name": "BaseBdev2", 00:15:38.856 "uuid": "a2e3ab5a-074b-5ef0-bff8-fc3224014927", 00:15:38.856 "is_configured": true, 00:15:38.856 "data_offset": 2048, 00:15:38.856 "data_size": 63488 00:15:38.856 }, 00:15:38.856 { 00:15:38.856 "name": "BaseBdev3", 00:15:38.856 "uuid": "c40bd1d3-b0bd-5eec-853f-87c00417e426", 00:15:38.856 "is_configured": true, 00:15:38.856 "data_offset": 2048, 00:15:38.856 "data_size": 63488 00:15:38.856 } 00:15:38.856 ] 00:15:38.856 }' 00:15:38.857 08:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.857 08:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:38.857 08:52:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.857 08:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:38.857 08:52:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:39.795 08:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:39.795 08:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:39.795 08:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.795 08:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:39.795 08:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:39.795 08:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.795 08:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.795 08:52:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.795 08:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.795 08:52:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.795 08:52:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.054 08:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:40.054 "name": "raid_bdev1", 00:15:40.054 "uuid": "771086a8-a590-448c-a11b-94beb415eebb", 00:15:40.054 "strip_size_kb": 64, 00:15:40.054 "state": "online", 00:15:40.054 "raid_level": "raid5f", 00:15:40.054 "superblock": true, 00:15:40.054 "num_base_bdevs": 3, 00:15:40.054 "num_base_bdevs_discovered": 3, 00:15:40.054 "num_base_bdevs_operational": 
3, 00:15:40.054 "process": { 00:15:40.054 "type": "rebuild", 00:15:40.054 "target": "spare", 00:15:40.054 "progress": { 00:15:40.054 "blocks": 45056, 00:15:40.054 "percent": 35 00:15:40.054 } 00:15:40.054 }, 00:15:40.054 "base_bdevs_list": [ 00:15:40.054 { 00:15:40.054 "name": "spare", 00:15:40.054 "uuid": "87d524aa-32b9-5556-b28e-434c6c205202", 00:15:40.054 "is_configured": true, 00:15:40.054 "data_offset": 2048, 00:15:40.054 "data_size": 63488 00:15:40.054 }, 00:15:40.054 { 00:15:40.054 "name": "BaseBdev2", 00:15:40.054 "uuid": "a2e3ab5a-074b-5ef0-bff8-fc3224014927", 00:15:40.054 "is_configured": true, 00:15:40.054 "data_offset": 2048, 00:15:40.054 "data_size": 63488 00:15:40.054 }, 00:15:40.054 { 00:15:40.054 "name": "BaseBdev3", 00:15:40.054 "uuid": "c40bd1d3-b0bd-5eec-853f-87c00417e426", 00:15:40.054 "is_configured": true, 00:15:40.054 "data_offset": 2048, 00:15:40.054 "data_size": 63488 00:15:40.054 } 00:15:40.054 ] 00:15:40.054 }' 00:15:40.054 08:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:40.054 08:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:40.054 08:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:40.054 08:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:40.055 08:52:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:40.990 08:52:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:40.990 08:52:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:40.990 08:52:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:40.990 08:52:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:40.990 
08:52:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:40.990 08:52:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:40.990 08:52:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.990 08:52:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.990 08:52:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.990 08:52:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.990 08:52:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.990 08:52:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:40.990 "name": "raid_bdev1", 00:15:40.990 "uuid": "771086a8-a590-448c-a11b-94beb415eebb", 00:15:40.990 "strip_size_kb": 64, 00:15:40.990 "state": "online", 00:15:40.990 "raid_level": "raid5f", 00:15:40.990 "superblock": true, 00:15:40.990 "num_base_bdevs": 3, 00:15:40.990 "num_base_bdevs_discovered": 3, 00:15:40.990 "num_base_bdevs_operational": 3, 00:15:40.990 "process": { 00:15:40.990 "type": "rebuild", 00:15:40.990 "target": "spare", 00:15:40.990 "progress": { 00:15:40.990 "blocks": 69632, 00:15:40.990 "percent": 54 00:15:40.990 } 00:15:40.990 }, 00:15:40.990 "base_bdevs_list": [ 00:15:40.990 { 00:15:40.990 "name": "spare", 00:15:40.990 "uuid": "87d524aa-32b9-5556-b28e-434c6c205202", 00:15:40.990 "is_configured": true, 00:15:40.990 "data_offset": 2048, 00:15:40.990 "data_size": 63488 00:15:40.990 }, 00:15:40.990 { 00:15:40.990 "name": "BaseBdev2", 00:15:40.990 "uuid": "a2e3ab5a-074b-5ef0-bff8-fc3224014927", 00:15:40.990 "is_configured": true, 00:15:40.990 "data_offset": 2048, 00:15:40.990 "data_size": 63488 00:15:40.990 }, 00:15:40.990 { 00:15:40.990 "name": "BaseBdev3", 00:15:40.990 "uuid": 
"c40bd1d3-b0bd-5eec-853f-87c00417e426", 00:15:40.990 "is_configured": true, 00:15:40.990 "data_offset": 2048, 00:15:40.990 "data_size": 63488 00:15:40.990 } 00:15:40.990 ] 00:15:40.990 }' 00:15:40.990 08:52:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:40.990 08:52:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:40.990 08:52:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:41.248 08:52:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:41.248 08:52:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:42.184 08:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:42.184 08:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:42.184 08:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.184 08:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:42.184 08:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:42.184 08:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.184 08:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.184 08:52:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.184 08:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.184 08:52:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.184 08:52:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.184 
08:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.184 "name": "raid_bdev1", 00:15:42.184 "uuid": "771086a8-a590-448c-a11b-94beb415eebb", 00:15:42.184 "strip_size_kb": 64, 00:15:42.184 "state": "online", 00:15:42.184 "raid_level": "raid5f", 00:15:42.184 "superblock": true, 00:15:42.184 "num_base_bdevs": 3, 00:15:42.184 "num_base_bdevs_discovered": 3, 00:15:42.184 "num_base_bdevs_operational": 3, 00:15:42.184 "process": { 00:15:42.184 "type": "rebuild", 00:15:42.184 "target": "spare", 00:15:42.184 "progress": { 00:15:42.184 "blocks": 92160, 00:15:42.184 "percent": 72 00:15:42.184 } 00:15:42.184 }, 00:15:42.184 "base_bdevs_list": [ 00:15:42.184 { 00:15:42.184 "name": "spare", 00:15:42.184 "uuid": "87d524aa-32b9-5556-b28e-434c6c205202", 00:15:42.184 "is_configured": true, 00:15:42.184 "data_offset": 2048, 00:15:42.184 "data_size": 63488 00:15:42.184 }, 00:15:42.184 { 00:15:42.184 "name": "BaseBdev2", 00:15:42.184 "uuid": "a2e3ab5a-074b-5ef0-bff8-fc3224014927", 00:15:42.184 "is_configured": true, 00:15:42.184 "data_offset": 2048, 00:15:42.184 "data_size": 63488 00:15:42.184 }, 00:15:42.184 { 00:15:42.184 "name": "BaseBdev3", 00:15:42.184 "uuid": "c40bd1d3-b0bd-5eec-853f-87c00417e426", 00:15:42.184 "is_configured": true, 00:15:42.184 "data_offset": 2048, 00:15:42.184 "data_size": 63488 00:15:42.184 } 00:15:42.184 ] 00:15:42.184 }' 00:15:42.184 08:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.184 08:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:42.184 08:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.444 08:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:42.444 08:52:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:43.383 08:52:19 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:43.383 08:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:43.383 08:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.383 08:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:43.383 08:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:43.383 08:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.383 08:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.383 08:52:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.383 08:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.383 08:52:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.383 08:52:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.383 08:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.383 "name": "raid_bdev1", 00:15:43.383 "uuid": "771086a8-a590-448c-a11b-94beb415eebb", 00:15:43.383 "strip_size_kb": 64, 00:15:43.383 "state": "online", 00:15:43.383 "raid_level": "raid5f", 00:15:43.383 "superblock": true, 00:15:43.383 "num_base_bdevs": 3, 00:15:43.383 "num_base_bdevs_discovered": 3, 00:15:43.383 "num_base_bdevs_operational": 3, 00:15:43.383 "process": { 00:15:43.383 "type": "rebuild", 00:15:43.383 "target": "spare", 00:15:43.383 "progress": { 00:15:43.383 "blocks": 116736, 00:15:43.383 "percent": 91 00:15:43.383 } 00:15:43.383 }, 00:15:43.383 "base_bdevs_list": [ 00:15:43.383 { 00:15:43.383 "name": "spare", 00:15:43.383 "uuid": 
"87d524aa-32b9-5556-b28e-434c6c205202", 00:15:43.383 "is_configured": true, 00:15:43.383 "data_offset": 2048, 00:15:43.383 "data_size": 63488 00:15:43.383 }, 00:15:43.383 { 00:15:43.383 "name": "BaseBdev2", 00:15:43.383 "uuid": "a2e3ab5a-074b-5ef0-bff8-fc3224014927", 00:15:43.383 "is_configured": true, 00:15:43.383 "data_offset": 2048, 00:15:43.383 "data_size": 63488 00:15:43.383 }, 00:15:43.383 { 00:15:43.383 "name": "BaseBdev3", 00:15:43.383 "uuid": "c40bd1d3-b0bd-5eec-853f-87c00417e426", 00:15:43.383 "is_configured": true, 00:15:43.383 "data_offset": 2048, 00:15:43.383 "data_size": 63488 00:15:43.383 } 00:15:43.383 ] 00:15:43.383 }' 00:15:43.383 08:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.383 08:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:43.383 08:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.383 08:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:43.383 08:52:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:43.953 [2024-10-05 08:52:20.162544] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:43.953 [2024-10-05 08:52:20.162666] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:43.953 [2024-10-05 08:52:20.162768] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:44.522 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:44.522 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:44.522 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.522 08:52:20 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:44.522 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:44.522 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.522 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.522 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.522 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.522 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.522 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.522 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.522 "name": "raid_bdev1", 00:15:44.522 "uuid": "771086a8-a590-448c-a11b-94beb415eebb", 00:15:44.522 "strip_size_kb": 64, 00:15:44.522 "state": "online", 00:15:44.522 "raid_level": "raid5f", 00:15:44.522 "superblock": true, 00:15:44.522 "num_base_bdevs": 3, 00:15:44.522 "num_base_bdevs_discovered": 3, 00:15:44.522 "num_base_bdevs_operational": 3, 00:15:44.522 "base_bdevs_list": [ 00:15:44.522 { 00:15:44.522 "name": "spare", 00:15:44.522 "uuid": "87d524aa-32b9-5556-b28e-434c6c205202", 00:15:44.522 "is_configured": true, 00:15:44.522 "data_offset": 2048, 00:15:44.522 "data_size": 63488 00:15:44.522 }, 00:15:44.522 { 00:15:44.522 "name": "BaseBdev2", 00:15:44.522 "uuid": "a2e3ab5a-074b-5ef0-bff8-fc3224014927", 00:15:44.522 "is_configured": true, 00:15:44.522 "data_offset": 2048, 00:15:44.522 "data_size": 63488 00:15:44.522 }, 00:15:44.522 { 00:15:44.522 "name": "BaseBdev3", 00:15:44.522 "uuid": "c40bd1d3-b0bd-5eec-853f-87c00417e426", 00:15:44.522 "is_configured": true, 00:15:44.522 "data_offset": 2048, 00:15:44.522 "data_size": 63488 00:15:44.522 } 
00:15:44.522 ] 00:15:44.522 }' 00:15:44.522 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.522 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:44.522 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.522 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:44.522 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:44.522 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:44.522 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.522 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:44.522 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:44.522 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.522 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.522 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.522 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.522 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.522 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.522 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.522 "name": "raid_bdev1", 00:15:44.522 "uuid": "771086a8-a590-448c-a11b-94beb415eebb", 00:15:44.522 "strip_size_kb": 64, 00:15:44.522 "state": "online", 00:15:44.522 "raid_level": 
"raid5f", 00:15:44.522 "superblock": true, 00:15:44.522 "num_base_bdevs": 3, 00:15:44.522 "num_base_bdevs_discovered": 3, 00:15:44.522 "num_base_bdevs_operational": 3, 00:15:44.522 "base_bdevs_list": [ 00:15:44.522 { 00:15:44.522 "name": "spare", 00:15:44.522 "uuid": "87d524aa-32b9-5556-b28e-434c6c205202", 00:15:44.522 "is_configured": true, 00:15:44.522 "data_offset": 2048, 00:15:44.522 "data_size": 63488 00:15:44.522 }, 00:15:44.522 { 00:15:44.522 "name": "BaseBdev2", 00:15:44.522 "uuid": "a2e3ab5a-074b-5ef0-bff8-fc3224014927", 00:15:44.522 "is_configured": true, 00:15:44.522 "data_offset": 2048, 00:15:44.522 "data_size": 63488 00:15:44.522 }, 00:15:44.522 { 00:15:44.522 "name": "BaseBdev3", 00:15:44.522 "uuid": "c40bd1d3-b0bd-5eec-853f-87c00417e426", 00:15:44.522 "is_configured": true, 00:15:44.522 "data_offset": 2048, 00:15:44.522 "data_size": 63488 00:15:44.522 } 00:15:44.522 ] 00:15:44.522 }' 00:15:44.522 08:52:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.782 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:44.782 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.782 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:44.782 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:44.782 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:44.782 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:44.782 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.782 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.782 08:52:21 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:44.782 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.782 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.782 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.782 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.782 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.782 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.782 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.782 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.782 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.782 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.782 "name": "raid_bdev1", 00:15:44.782 "uuid": "771086a8-a590-448c-a11b-94beb415eebb", 00:15:44.782 "strip_size_kb": 64, 00:15:44.782 "state": "online", 00:15:44.782 "raid_level": "raid5f", 00:15:44.782 "superblock": true, 00:15:44.782 "num_base_bdevs": 3, 00:15:44.782 "num_base_bdevs_discovered": 3, 00:15:44.782 "num_base_bdevs_operational": 3, 00:15:44.782 "base_bdevs_list": [ 00:15:44.782 { 00:15:44.782 "name": "spare", 00:15:44.782 "uuid": "87d524aa-32b9-5556-b28e-434c6c205202", 00:15:44.782 "is_configured": true, 00:15:44.782 "data_offset": 2048, 00:15:44.782 "data_size": 63488 00:15:44.782 }, 00:15:44.782 { 00:15:44.782 "name": "BaseBdev2", 00:15:44.782 "uuid": "a2e3ab5a-074b-5ef0-bff8-fc3224014927", 00:15:44.782 "is_configured": true, 00:15:44.782 "data_offset": 2048, 00:15:44.782 
"data_size": 63488 00:15:44.782 }, 00:15:44.782 { 00:15:44.782 "name": "BaseBdev3", 00:15:44.782 "uuid": "c40bd1d3-b0bd-5eec-853f-87c00417e426", 00:15:44.782 "is_configured": true, 00:15:44.782 "data_offset": 2048, 00:15:44.782 "data_size": 63488 00:15:44.782 } 00:15:44.782 ] 00:15:44.782 }' 00:15:44.782 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.782 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.351 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:45.351 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.351 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.351 [2024-10-05 08:52:21.535353] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:45.351 [2024-10-05 08:52:21.535383] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:45.351 [2024-10-05 08:52:21.535455] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:45.351 [2024-10-05 08:52:21.535524] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:45.351 [2024-10-05 08:52:21.535538] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:45.351 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.351 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.351 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:45.351 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.351 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:45.351 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.351 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:45.351 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:45.351 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:45.351 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:45.351 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:45.352 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:45.352 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:45.352 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:45.352 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:45.352 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:45.352 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:45.352 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:45.352 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:45.352 /dev/nbd0 00:15:45.352 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:45.611 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:45.611 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 
00:15:45.611 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:45.611 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:45.611 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:45.611 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:45.611 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:45.611 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:45.611 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:45.611 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:45.611 1+0 records in 00:15:45.611 1+0 records out 00:15:45.611 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00061733 s, 6.6 MB/s 00:15:45.611 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.611 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:45.611 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.611 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:45.611 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:45.611 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:45.611 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:45.611 08:52:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:45.611 /dev/nbd1 00:15:45.611 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:45.871 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:45.871 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:45.871 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:45.871 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:45.871 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:45.871 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:45.871 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:45.871 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:45.871 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:45.871 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:45.871 1+0 records in 00:15:45.871 1+0 records out 00:15:45.871 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000561875 s, 7.3 MB/s 00:15:45.871 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.871 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:45.871 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.871 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # 
'[' 4096 '!=' 0 ']' 00:15:45.871 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:45.871 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:45.871 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:45.871 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:45.871 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:45.871 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:45.871 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:45.872 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:45.872 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:45.872 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:45.872 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:46.131 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:46.131 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:46.131 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:46.131 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:46.131 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:46.131 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:46.131 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@41 -- # break 00:15:46.131 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:46.131 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:46.131 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:46.392 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:46.392 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:46.392 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:46.392 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:46.392 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:46.392 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:46.392 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:46.392 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:46.392 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:46.392 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:46.392 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.392 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.392 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.392 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:46.392 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:46.392 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.392 [2024-10-05 08:52:22.728196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:46.392 [2024-10-05 08:52:22.728253] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.392 [2024-10-05 08:52:22.728274] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:46.392 [2024-10-05 08:52:22.728284] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.392 [2024-10-05 08:52:22.730579] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.392 [2024-10-05 08:52:22.730624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:46.392 [2024-10-05 08:52:22.730709] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:46.392 [2024-10-05 08:52:22.730770] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:46.392 [2024-10-05 08:52:22.730914] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:46.392 [2024-10-05 08:52:22.731028] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:46.392 spare 00:15:46.392 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.392 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:46.392 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.392 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.392 [2024-10-05 08:52:22.830918] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:46.392 [2024-10-05 08:52:22.830999] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:46.392 [2024-10-05 08:52:22.831256] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:15:46.392 [2024-10-05 08:52:22.836435] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:46.392 [2024-10-05 08:52:22.836455] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:46.392 [2024-10-05 08:52:22.836614] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:46.392 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.392 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:46.392 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.392 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.392 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.392 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.392 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:46.392 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.392 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.392 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.392 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.392 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.392 08:52:22 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.392 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.392 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.652 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.652 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.652 "name": "raid_bdev1", 00:15:46.652 "uuid": "771086a8-a590-448c-a11b-94beb415eebb", 00:15:46.652 "strip_size_kb": 64, 00:15:46.652 "state": "online", 00:15:46.652 "raid_level": "raid5f", 00:15:46.652 "superblock": true, 00:15:46.652 "num_base_bdevs": 3, 00:15:46.652 "num_base_bdevs_discovered": 3, 00:15:46.652 "num_base_bdevs_operational": 3, 00:15:46.652 "base_bdevs_list": [ 00:15:46.652 { 00:15:46.652 "name": "spare", 00:15:46.652 "uuid": "87d524aa-32b9-5556-b28e-434c6c205202", 00:15:46.652 "is_configured": true, 00:15:46.652 "data_offset": 2048, 00:15:46.652 "data_size": 63488 00:15:46.652 }, 00:15:46.652 { 00:15:46.652 "name": "BaseBdev2", 00:15:46.652 "uuid": "a2e3ab5a-074b-5ef0-bff8-fc3224014927", 00:15:46.652 "is_configured": true, 00:15:46.652 "data_offset": 2048, 00:15:46.652 "data_size": 63488 00:15:46.652 }, 00:15:46.652 { 00:15:46.652 "name": "BaseBdev3", 00:15:46.652 "uuid": "c40bd1d3-b0bd-5eec-853f-87c00417e426", 00:15:46.652 "is_configured": true, 00:15:46.652 "data_offset": 2048, 00:15:46.652 "data_size": 63488 00:15:46.652 } 00:15:46.652 ] 00:15:46.652 }' 00:15:46.652 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.652 08:52:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.912 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:46.912 08:52:23 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.912 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:46.912 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:46.912 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.912 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.912 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.912 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.912 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.912 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.912 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.912 "name": "raid_bdev1", 00:15:46.912 "uuid": "771086a8-a590-448c-a11b-94beb415eebb", 00:15:46.912 "strip_size_kb": 64, 00:15:46.912 "state": "online", 00:15:46.912 "raid_level": "raid5f", 00:15:46.912 "superblock": true, 00:15:46.912 "num_base_bdevs": 3, 00:15:46.912 "num_base_bdevs_discovered": 3, 00:15:46.912 "num_base_bdevs_operational": 3, 00:15:46.912 "base_bdevs_list": [ 00:15:46.912 { 00:15:46.912 "name": "spare", 00:15:46.912 "uuid": "87d524aa-32b9-5556-b28e-434c6c205202", 00:15:46.912 "is_configured": true, 00:15:46.912 "data_offset": 2048, 00:15:46.912 "data_size": 63488 00:15:46.912 }, 00:15:46.912 { 00:15:46.912 "name": "BaseBdev2", 00:15:46.912 "uuid": "a2e3ab5a-074b-5ef0-bff8-fc3224014927", 00:15:46.912 "is_configured": true, 00:15:46.912 "data_offset": 2048, 00:15:46.912 "data_size": 63488 00:15:46.912 }, 00:15:46.912 { 00:15:46.912 "name": "BaseBdev3", 00:15:46.912 "uuid": 
"c40bd1d3-b0bd-5eec-853f-87c00417e426", 00:15:46.912 "is_configured": true, 00:15:46.912 "data_offset": 2048, 00:15:46.912 "data_size": 63488 00:15:46.912 } 00:15:46.913 ] 00:15:46.913 }' 00:15:46.913 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:47.172 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:47.172 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:47.172 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:47.172 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:47.172 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.172 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.172 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.172 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.172 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:47.172 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:47.172 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.172 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.172 [2024-10-05 08:52:23.481731] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:47.172 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.172 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:47.172 
08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:47.172 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.172 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.172 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.173 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:47.173 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.173 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.173 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.173 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.173 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.173 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.173 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.173 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.173 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.173 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.173 "name": "raid_bdev1", 00:15:47.173 "uuid": "771086a8-a590-448c-a11b-94beb415eebb", 00:15:47.173 "strip_size_kb": 64, 00:15:47.173 "state": "online", 00:15:47.173 "raid_level": "raid5f", 00:15:47.173 "superblock": true, 00:15:47.173 "num_base_bdevs": 3, 00:15:47.173 "num_base_bdevs_discovered": 2, 00:15:47.173 "num_base_bdevs_operational": 2, 
00:15:47.173 "base_bdevs_list": [ 00:15:47.173 { 00:15:47.173 "name": null, 00:15:47.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.173 "is_configured": false, 00:15:47.173 "data_offset": 0, 00:15:47.173 "data_size": 63488 00:15:47.173 }, 00:15:47.173 { 00:15:47.173 "name": "BaseBdev2", 00:15:47.173 "uuid": "a2e3ab5a-074b-5ef0-bff8-fc3224014927", 00:15:47.173 "is_configured": true, 00:15:47.173 "data_offset": 2048, 00:15:47.173 "data_size": 63488 00:15:47.173 }, 00:15:47.173 { 00:15:47.173 "name": "BaseBdev3", 00:15:47.173 "uuid": "c40bd1d3-b0bd-5eec-853f-87c00417e426", 00:15:47.173 "is_configured": true, 00:15:47.173 "data_offset": 2048, 00:15:47.173 "data_size": 63488 00:15:47.173 } 00:15:47.173 ] 00:15:47.173 }' 00:15:47.173 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.173 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.741 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:47.741 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.741 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.741 [2024-10-05 08:52:23.960998] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:47.741 [2024-10-05 08:52:23.961192] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:47.741 [2024-10-05 08:52:23.961256] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:47.741 [2024-10-05 08:52:23.961311] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:47.741 [2024-10-05 08:52:23.976036] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:15:47.741 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.741 08:52:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:47.741 [2024-10-05 08:52:23.983027] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:48.678 08:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:48.678 08:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.678 08:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:48.678 08:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:48.678 08:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.678 08:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.678 08:52:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.679 08:52:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.679 08:52:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.679 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.679 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.679 "name": "raid_bdev1", 00:15:48.679 "uuid": "771086a8-a590-448c-a11b-94beb415eebb", 00:15:48.679 "strip_size_kb": 64, 00:15:48.679 "state": "online", 00:15:48.679 
"raid_level": "raid5f", 00:15:48.679 "superblock": true, 00:15:48.679 "num_base_bdevs": 3, 00:15:48.679 "num_base_bdevs_discovered": 3, 00:15:48.679 "num_base_bdevs_operational": 3, 00:15:48.679 "process": { 00:15:48.679 "type": "rebuild", 00:15:48.679 "target": "spare", 00:15:48.679 "progress": { 00:15:48.679 "blocks": 20480, 00:15:48.679 "percent": 16 00:15:48.679 } 00:15:48.679 }, 00:15:48.679 "base_bdevs_list": [ 00:15:48.679 { 00:15:48.679 "name": "spare", 00:15:48.679 "uuid": "87d524aa-32b9-5556-b28e-434c6c205202", 00:15:48.679 "is_configured": true, 00:15:48.679 "data_offset": 2048, 00:15:48.679 "data_size": 63488 00:15:48.679 }, 00:15:48.679 { 00:15:48.679 "name": "BaseBdev2", 00:15:48.679 "uuid": "a2e3ab5a-074b-5ef0-bff8-fc3224014927", 00:15:48.679 "is_configured": true, 00:15:48.679 "data_offset": 2048, 00:15:48.679 "data_size": 63488 00:15:48.679 }, 00:15:48.679 { 00:15:48.679 "name": "BaseBdev3", 00:15:48.679 "uuid": "c40bd1d3-b0bd-5eec-853f-87c00417e426", 00:15:48.679 "is_configured": true, 00:15:48.679 "data_offset": 2048, 00:15:48.679 "data_size": 63488 00:15:48.679 } 00:15:48.679 ] 00:15:48.679 }' 00:15:48.679 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.679 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:48.679 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.679 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:48.679 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:48.679 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.679 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.679 [2024-10-05 08:52:25.118283] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:48.939 [2024-10-05 08:52:25.190327] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:48.939 [2024-10-05 08:52:25.190397] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:48.939 [2024-10-05 08:52:25.190422] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:48.939 [2024-10-05 08:52:25.190431] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:48.939 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.939 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:48.939 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.939 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.939 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.939 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.939 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:48.939 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.939 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.939 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.939 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.939 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.939 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.939 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.939 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.939 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.939 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.939 "name": "raid_bdev1", 00:15:48.939 "uuid": "771086a8-a590-448c-a11b-94beb415eebb", 00:15:48.939 "strip_size_kb": 64, 00:15:48.939 "state": "online", 00:15:48.939 "raid_level": "raid5f", 00:15:48.939 "superblock": true, 00:15:48.939 "num_base_bdevs": 3, 00:15:48.939 "num_base_bdevs_discovered": 2, 00:15:48.939 "num_base_bdevs_operational": 2, 00:15:48.939 "base_bdevs_list": [ 00:15:48.939 { 00:15:48.939 "name": null, 00:15:48.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.939 "is_configured": false, 00:15:48.939 "data_offset": 0, 00:15:48.939 "data_size": 63488 00:15:48.939 }, 00:15:48.939 { 00:15:48.939 "name": "BaseBdev2", 00:15:48.939 "uuid": "a2e3ab5a-074b-5ef0-bff8-fc3224014927", 00:15:48.939 "is_configured": true, 00:15:48.939 "data_offset": 2048, 00:15:48.939 "data_size": 63488 00:15:48.939 }, 00:15:48.939 { 00:15:48.939 "name": "BaseBdev3", 00:15:48.939 "uuid": "c40bd1d3-b0bd-5eec-853f-87c00417e426", 00:15:48.939 "is_configured": true, 00:15:48.939 "data_offset": 2048, 00:15:48.939 "data_size": 63488 00:15:48.939 } 00:15:48.939 ] 00:15:48.939 }' 00:15:48.939 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.939 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.508 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:49.508 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.508 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.508 [2024-10-05 08:52:25.692636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:49.508 [2024-10-05 08:52:25.692739] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.508 [2024-10-05 08:52:25.692776] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:15:49.508 [2024-10-05 08:52:25.692809] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.508 [2024-10-05 08:52:25.693322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.508 [2024-10-05 08:52:25.693386] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:49.508 [2024-10-05 08:52:25.693497] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:49.508 [2024-10-05 08:52:25.693542] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:49.508 [2024-10-05 08:52:25.693586] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:49.508 [2024-10-05 08:52:25.693665] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:49.508 [2024-10-05 08:52:25.707646] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:15:49.508 spare 00:15:49.508 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.508 08:52:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:49.508 [2024-10-05 08:52:25.714807] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:50.447 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:50.447 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:50.447 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:50.447 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:50.447 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:50.447 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.447 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.447 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.447 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.447 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.447 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:50.447 "name": "raid_bdev1", 00:15:50.447 "uuid": "771086a8-a590-448c-a11b-94beb415eebb", 00:15:50.447 "strip_size_kb": 64, 00:15:50.447 "state": 
"online", 00:15:50.447 "raid_level": "raid5f", 00:15:50.447 "superblock": true, 00:15:50.447 "num_base_bdevs": 3, 00:15:50.447 "num_base_bdevs_discovered": 3, 00:15:50.447 "num_base_bdevs_operational": 3, 00:15:50.447 "process": { 00:15:50.447 "type": "rebuild", 00:15:50.447 "target": "spare", 00:15:50.447 "progress": { 00:15:50.447 "blocks": 20480, 00:15:50.447 "percent": 16 00:15:50.447 } 00:15:50.447 }, 00:15:50.447 "base_bdevs_list": [ 00:15:50.447 { 00:15:50.447 "name": "spare", 00:15:50.447 "uuid": "87d524aa-32b9-5556-b28e-434c6c205202", 00:15:50.447 "is_configured": true, 00:15:50.447 "data_offset": 2048, 00:15:50.447 "data_size": 63488 00:15:50.447 }, 00:15:50.447 { 00:15:50.447 "name": "BaseBdev2", 00:15:50.447 "uuid": "a2e3ab5a-074b-5ef0-bff8-fc3224014927", 00:15:50.447 "is_configured": true, 00:15:50.447 "data_offset": 2048, 00:15:50.447 "data_size": 63488 00:15:50.447 }, 00:15:50.447 { 00:15:50.447 "name": "BaseBdev3", 00:15:50.447 "uuid": "c40bd1d3-b0bd-5eec-853f-87c00417e426", 00:15:50.447 "is_configured": true, 00:15:50.447 "data_offset": 2048, 00:15:50.447 "data_size": 63488 00:15:50.447 } 00:15:50.447 ] 00:15:50.447 }' 00:15:50.447 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:50.447 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:50.447 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.447 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:50.447 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:50.447 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.447 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.447 [2024-10-05 08:52:26.869958] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:50.707 [2024-10-05 08:52:26.922062] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:50.707 [2024-10-05 08:52:26.922113] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.707 [2024-10-05 08:52:26.922130] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:50.707 [2024-10-05 08:52:26.922137] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:50.707 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.707 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:50.707 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.708 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.708 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.708 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.708 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:50.708 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.708 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.708 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.708 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.708 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.708 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.708 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.708 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.708 08:52:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.708 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.708 "name": "raid_bdev1", 00:15:50.708 "uuid": "771086a8-a590-448c-a11b-94beb415eebb", 00:15:50.708 "strip_size_kb": 64, 00:15:50.708 "state": "online", 00:15:50.708 "raid_level": "raid5f", 00:15:50.708 "superblock": true, 00:15:50.708 "num_base_bdevs": 3, 00:15:50.708 "num_base_bdevs_discovered": 2, 00:15:50.708 "num_base_bdevs_operational": 2, 00:15:50.708 "base_bdevs_list": [ 00:15:50.708 { 00:15:50.708 "name": null, 00:15:50.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.708 "is_configured": false, 00:15:50.708 "data_offset": 0, 00:15:50.708 "data_size": 63488 00:15:50.708 }, 00:15:50.708 { 00:15:50.708 "name": "BaseBdev2", 00:15:50.708 "uuid": "a2e3ab5a-074b-5ef0-bff8-fc3224014927", 00:15:50.708 "is_configured": true, 00:15:50.708 "data_offset": 2048, 00:15:50.708 "data_size": 63488 00:15:50.708 }, 00:15:50.708 { 00:15:50.708 "name": "BaseBdev3", 00:15:50.708 "uuid": "c40bd1d3-b0bd-5eec-853f-87c00417e426", 00:15:50.708 "is_configured": true, 00:15:50.708 "data_offset": 2048, 00:15:50.708 "data_size": 63488 00:15:50.708 } 00:15:50.708 ] 00:15:50.708 }' 00:15:50.708 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.708 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.968 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:50.968 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:15:50.968 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:50.968 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:50.968 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:50.968 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.968 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.968 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.968 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.968 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.968 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:50.968 "name": "raid_bdev1", 00:15:50.968 "uuid": "771086a8-a590-448c-a11b-94beb415eebb", 00:15:50.968 "strip_size_kb": 64, 00:15:50.968 "state": "online", 00:15:50.968 "raid_level": "raid5f", 00:15:50.968 "superblock": true, 00:15:50.968 "num_base_bdevs": 3, 00:15:50.968 "num_base_bdevs_discovered": 2, 00:15:50.968 "num_base_bdevs_operational": 2, 00:15:50.968 "base_bdevs_list": [ 00:15:50.968 { 00:15:50.968 "name": null, 00:15:50.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.968 "is_configured": false, 00:15:50.968 "data_offset": 0, 00:15:50.968 "data_size": 63488 00:15:50.968 }, 00:15:50.968 { 00:15:50.968 "name": "BaseBdev2", 00:15:50.968 "uuid": "a2e3ab5a-074b-5ef0-bff8-fc3224014927", 00:15:50.968 "is_configured": true, 00:15:50.968 "data_offset": 2048, 00:15:50.968 "data_size": 63488 00:15:50.968 }, 00:15:50.968 { 00:15:50.968 "name": "BaseBdev3", 00:15:50.968 "uuid": "c40bd1d3-b0bd-5eec-853f-87c00417e426", 00:15:50.968 "is_configured": true, 
00:15:50.968 "data_offset": 2048, 00:15:50.968 "data_size": 63488 00:15:50.968 } 00:15:50.968 ] 00:15:50.968 }' 00:15:50.968 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:51.229 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:51.229 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:51.229 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:51.229 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:51.229 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.229 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.229 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.229 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:51.229 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.229 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.229 [2024-10-05 08:52:27.528516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:51.229 [2024-10-05 08:52:27.528610] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.229 [2024-10-05 08:52:27.528652] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:51.229 [2024-10-05 08:52:27.528681] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.229 [2024-10-05 08:52:27.529169] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.229 [2024-10-05 
08:52:27.529228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:51.229 [2024-10-05 08:52:27.529331] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:51.229 [2024-10-05 08:52:27.529375] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:51.229 [2024-10-05 08:52:27.529421] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:51.229 [2024-10-05 08:52:27.529454] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:51.229 BaseBdev1 00:15:51.229 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.229 08:52:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:52.169 08:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:52.169 08:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:52.169 08:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.169 08:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:52.169 08:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.169 08:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:52.169 08:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.169 08:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.169 08:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.169 08:52:28 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.169 08:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.169 08:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.169 08:52:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.169 08:52:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.169 08:52:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.169 08:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.169 "name": "raid_bdev1", 00:15:52.169 "uuid": "771086a8-a590-448c-a11b-94beb415eebb", 00:15:52.169 "strip_size_kb": 64, 00:15:52.169 "state": "online", 00:15:52.169 "raid_level": "raid5f", 00:15:52.169 "superblock": true, 00:15:52.169 "num_base_bdevs": 3, 00:15:52.169 "num_base_bdevs_discovered": 2, 00:15:52.169 "num_base_bdevs_operational": 2, 00:15:52.169 "base_bdevs_list": [ 00:15:52.169 { 00:15:52.169 "name": null, 00:15:52.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.169 "is_configured": false, 00:15:52.169 "data_offset": 0, 00:15:52.169 "data_size": 63488 00:15:52.169 }, 00:15:52.169 { 00:15:52.169 "name": "BaseBdev2", 00:15:52.169 "uuid": "a2e3ab5a-074b-5ef0-bff8-fc3224014927", 00:15:52.169 "is_configured": true, 00:15:52.169 "data_offset": 2048, 00:15:52.169 "data_size": 63488 00:15:52.169 }, 00:15:52.169 { 00:15:52.169 "name": "BaseBdev3", 00:15:52.169 "uuid": "c40bd1d3-b0bd-5eec-853f-87c00417e426", 00:15:52.169 "is_configured": true, 00:15:52.169 "data_offset": 2048, 00:15:52.169 "data_size": 63488 00:15:52.169 } 00:15:52.169 ] 00:15:52.169 }' 00:15:52.169 08:52:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.169 08:52:28 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:52.739 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:52.739 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:52.739 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:52.739 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:52.739 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:52.739 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.739 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.739 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.739 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.739 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.739 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:52.739 "name": "raid_bdev1", 00:15:52.739 "uuid": "771086a8-a590-448c-a11b-94beb415eebb", 00:15:52.739 "strip_size_kb": 64, 00:15:52.739 "state": "online", 00:15:52.739 "raid_level": "raid5f", 00:15:52.739 "superblock": true, 00:15:52.739 "num_base_bdevs": 3, 00:15:52.739 "num_base_bdevs_discovered": 2, 00:15:52.739 "num_base_bdevs_operational": 2, 00:15:52.739 "base_bdevs_list": [ 00:15:52.739 { 00:15:52.739 "name": null, 00:15:52.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.739 "is_configured": false, 00:15:52.739 "data_offset": 0, 00:15:52.739 "data_size": 63488 00:15:52.739 }, 00:15:52.739 { 00:15:52.739 "name": "BaseBdev2", 00:15:52.739 "uuid": "a2e3ab5a-074b-5ef0-bff8-fc3224014927", 
00:15:52.739 "is_configured": true, 00:15:52.739 "data_offset": 2048, 00:15:52.739 "data_size": 63488 00:15:52.739 }, 00:15:52.739 { 00:15:52.739 "name": "BaseBdev3", 00:15:52.739 "uuid": "c40bd1d3-b0bd-5eec-853f-87c00417e426", 00:15:52.739 "is_configured": true, 00:15:52.739 "data_offset": 2048, 00:15:52.739 "data_size": 63488 00:15:52.739 } 00:15:52.739 ] 00:15:52.739 }' 00:15:52.739 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:52.739 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:52.739 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:52.739 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:52.739 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:52.739 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:15:52.739 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:52.739 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:52.739 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:52.739 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:52.739 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:52.739 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:52.739 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.739 08:52:29 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.739 [2024-10-05 08:52:29.197660] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:52.739 [2024-10-05 08:52:29.197846] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:52.739 [2024-10-05 08:52:29.197905] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:52.739 request: 00:15:52.739 { 00:15:52.739 "base_bdev": "BaseBdev1", 00:15:52.739 "raid_bdev": "raid_bdev1", 00:15:52.739 "method": "bdev_raid_add_base_bdev", 00:15:52.739 "req_id": 1 00:15:52.739 } 00:15:52.739 Got JSON-RPC error response 00:15:52.739 response: 00:15:52.739 { 00:15:52.739 "code": -22, 00:15:52.739 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:52.739 } 00:15:52.739 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:52.739 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:15:52.739 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:52.739 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:52.739 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:52.739 08:52:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:54.129 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:54.130 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.130 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.130 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:54.130 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.130 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:54.130 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.130 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.130 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.130 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.130 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.130 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.130 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.130 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.130 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.130 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.130 "name": "raid_bdev1", 00:15:54.130 "uuid": "771086a8-a590-448c-a11b-94beb415eebb", 00:15:54.130 "strip_size_kb": 64, 00:15:54.130 "state": "online", 00:15:54.130 "raid_level": "raid5f", 00:15:54.130 "superblock": true, 00:15:54.130 "num_base_bdevs": 3, 00:15:54.130 "num_base_bdevs_discovered": 2, 00:15:54.130 "num_base_bdevs_operational": 2, 00:15:54.130 "base_bdevs_list": [ 00:15:54.130 { 00:15:54.130 "name": null, 00:15:54.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.130 "is_configured": false, 00:15:54.130 "data_offset": 0, 00:15:54.130 "data_size": 63488 00:15:54.130 }, 00:15:54.130 { 00:15:54.130 
"name": "BaseBdev2", 00:15:54.130 "uuid": "a2e3ab5a-074b-5ef0-bff8-fc3224014927", 00:15:54.130 "is_configured": true, 00:15:54.130 "data_offset": 2048, 00:15:54.130 "data_size": 63488 00:15:54.130 }, 00:15:54.130 { 00:15:54.130 "name": "BaseBdev3", 00:15:54.130 "uuid": "c40bd1d3-b0bd-5eec-853f-87c00417e426", 00:15:54.130 "is_configured": true, 00:15:54.130 "data_offset": 2048, 00:15:54.130 "data_size": 63488 00:15:54.130 } 00:15:54.130 ] 00:15:54.130 }' 00:15:54.130 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.130 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.407 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:54.407 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:54.407 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:54.407 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:54.407 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:54.407 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.407 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.407 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.407 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.408 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.408 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:54.408 "name": "raid_bdev1", 00:15:54.408 "uuid": "771086a8-a590-448c-a11b-94beb415eebb", 00:15:54.408 
"strip_size_kb": 64, 00:15:54.408 "state": "online", 00:15:54.408 "raid_level": "raid5f", 00:15:54.408 "superblock": true, 00:15:54.408 "num_base_bdevs": 3, 00:15:54.408 "num_base_bdevs_discovered": 2, 00:15:54.408 "num_base_bdevs_operational": 2, 00:15:54.408 "base_bdevs_list": [ 00:15:54.408 { 00:15:54.408 "name": null, 00:15:54.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.408 "is_configured": false, 00:15:54.408 "data_offset": 0, 00:15:54.408 "data_size": 63488 00:15:54.408 }, 00:15:54.408 { 00:15:54.408 "name": "BaseBdev2", 00:15:54.408 "uuid": "a2e3ab5a-074b-5ef0-bff8-fc3224014927", 00:15:54.408 "is_configured": true, 00:15:54.408 "data_offset": 2048, 00:15:54.408 "data_size": 63488 00:15:54.408 }, 00:15:54.408 { 00:15:54.408 "name": "BaseBdev3", 00:15:54.408 "uuid": "c40bd1d3-b0bd-5eec-853f-87c00417e426", 00:15:54.408 "is_configured": true, 00:15:54.408 "data_offset": 2048, 00:15:54.408 "data_size": 63488 00:15:54.408 } 00:15:54.408 ] 00:15:54.408 }' 00:15:54.408 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:54.408 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:54.408 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:54.408 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:54.408 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 79015 00:15:54.408 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 79015 ']' 00:15:54.408 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 79015 00:15:54.408 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:15:54.408 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:54.408 08:52:30 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79015 00:15:54.408 killing process with pid 79015 00:15:54.408 Received shutdown signal, test time was about 60.000000 seconds 00:15:54.408 00:15:54.408 Latency(us) 00:15:54.408 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:54.408 =================================================================================================================== 00:15:54.408 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:54.408 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:54.408 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:54.408 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79015' 00:15:54.408 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 79015 00:15:54.408 [2024-10-05 08:52:30.814519] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:54.408 [2024-10-05 08:52:30.814627] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:54.408 08:52:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 79015 00:15:54.408 [2024-10-05 08:52:30.814684] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:54.408 [2024-10-05 08:52:30.814697] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:54.978 [2024-10-05 08:52:31.186818] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:55.918 08:52:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:55.918 00:15:55.918 real 0m23.409s 00:15:55.918 user 0m29.830s 00:15:55.918 sys 0m3.027s 00:15:55.918 08:52:32 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:15:55.918 08:52:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.918 ************************************ 00:15:55.918 END TEST raid5f_rebuild_test_sb 00:15:55.918 ************************************ 00:15:56.179 08:52:32 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:56.179 08:52:32 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:15:56.179 08:52:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:56.179 08:52:32 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:56.179 08:52:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:56.179 ************************************ 00:15:56.179 START TEST raid5f_state_function_test 00:15:56.179 ************************************ 00:15:56.179 08:52:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 false 00:15:56.179 08:52:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:56.179 08:52:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:56.179 08:52:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:56.179 08:52:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:56.179 08:52:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:56.179 08:52:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:56.179 08:52:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:56.179 08:52:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:56.179 08:52:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # 
(( i <= num_base_bdevs )) 00:15:56.179 08:52:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:56.179 08:52:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:56.179 08:52:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:56.179 08:52:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:56.179 08:52:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:56.179 08:52:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:56.179 08:52:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:56.179 08:52:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:56.179 08:52:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:56.179 08:52:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:56.179 08:52:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:56.179 08:52:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:56.179 08:52:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:56.179 08:52:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:56.179 08:52:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:56.179 08:52:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:56.179 08:52:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:56.179 08:52:32 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:56.179 08:52:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:56.179 08:52:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:56.179 08:52:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=79634 00:15:56.179 08:52:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:56.179 Process raid pid: 79634 00:15:56.179 08:52:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79634' 00:15:56.179 08:52:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 79634 00:15:56.179 08:52:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 79634 ']' 00:15:56.179 08:52:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.179 08:52:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:56.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.179 08:52:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.179 08:52:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:56.179 08:52:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.179 [2024-10-05 08:52:32.561941] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:15:56.179 [2024-10-05 08:52:32.562074] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:56.440 [2024-10-05 08:52:32.730651] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.698 [2024-10-05 08:52:32.918661] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.698 [2024-10-05 08:52:33.107056] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:56.698 [2024-10-05 08:52:33.107089] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:56.958 08:52:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:56.958 08:52:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:15:56.958 08:52:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:56.958 08:52:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.958 08:52:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.958 [2024-10-05 08:52:33.383696] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:56.958 [2024-10-05 08:52:33.383749] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:56.958 [2024-10-05 08:52:33.383759] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:56.958 [2024-10-05 08:52:33.383785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:56.958 [2024-10-05 08:52:33.383791] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:15:56.958 [2024-10-05 08:52:33.383799] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:56.958 [2024-10-05 08:52:33.383805] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:56.958 [2024-10-05 08:52:33.383813] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:56.958 08:52:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.958 08:52:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:56.958 08:52:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:56.958 08:52:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:56.958 08:52:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:56.958 08:52:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.958 08:52:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:56.958 08:52:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.958 08:52:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.958 08:52:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.958 08:52:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.958 08:52:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.958 08:52:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.958 08:52:33 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.958 08:52:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.958 08:52:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.218 08:52:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.218 "name": "Existed_Raid", 00:15:57.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.218 "strip_size_kb": 64, 00:15:57.218 "state": "configuring", 00:15:57.218 "raid_level": "raid5f", 00:15:57.218 "superblock": false, 00:15:57.218 "num_base_bdevs": 4, 00:15:57.218 "num_base_bdevs_discovered": 0, 00:15:57.218 "num_base_bdevs_operational": 4, 00:15:57.218 "base_bdevs_list": [ 00:15:57.218 { 00:15:57.218 "name": "BaseBdev1", 00:15:57.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.218 "is_configured": false, 00:15:57.218 "data_offset": 0, 00:15:57.218 "data_size": 0 00:15:57.218 }, 00:15:57.218 { 00:15:57.218 "name": "BaseBdev2", 00:15:57.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.218 "is_configured": false, 00:15:57.218 "data_offset": 0, 00:15:57.218 "data_size": 0 00:15:57.218 }, 00:15:57.218 { 00:15:57.218 "name": "BaseBdev3", 00:15:57.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.218 "is_configured": false, 00:15:57.218 "data_offset": 0, 00:15:57.218 "data_size": 0 00:15:57.218 }, 00:15:57.218 { 00:15:57.218 "name": "BaseBdev4", 00:15:57.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.218 "is_configured": false, 00:15:57.218 "data_offset": 0, 00:15:57.218 "data_size": 0 00:15:57.218 } 00:15:57.218 ] 00:15:57.218 }' 00:15:57.218 08:52:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.218 08:52:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.478 08:52:33 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:57.478 08:52:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.478 08:52:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.478 [2024-10-05 08:52:33.842941] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:57.478 [2024-10-05 08:52:33.842991] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:57.478 08:52:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.478 08:52:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:57.478 08:52:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.478 08:52:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.478 [2024-10-05 08:52:33.854932] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:57.478 [2024-10-05 08:52:33.854981] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:57.478 [2024-10-05 08:52:33.855005] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:57.478 [2024-10-05 08:52:33.855014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:57.478 [2024-10-05 08:52:33.855020] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:57.478 [2024-10-05 08:52:33.855028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:57.478 [2024-10-05 08:52:33.855034] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:15:57.478 [2024-10-05 08:52:33.855042] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:57.478 08:52:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.478 08:52:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:57.478 08:52:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.478 08:52:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.478 [2024-10-05 08:52:33.931824] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:57.478 BaseBdev1 00:15:57.478 08:52:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.478 08:52:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:57.478 08:52:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:57.478 08:52:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:57.478 08:52:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:57.478 08:52:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:57.478 08:52:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:57.478 08:52:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:57.478 08:52:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.478 08:52:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.478 08:52:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.478 
08:52:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:57.478 08:52:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.478 08:52:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.738 [ 00:15:57.738 { 00:15:57.738 "name": "BaseBdev1", 00:15:57.738 "aliases": [ 00:15:57.738 "2c662114-0d27-4da6-99f9-412a4ac5bfce" 00:15:57.738 ], 00:15:57.738 "product_name": "Malloc disk", 00:15:57.738 "block_size": 512, 00:15:57.738 "num_blocks": 65536, 00:15:57.738 "uuid": "2c662114-0d27-4da6-99f9-412a4ac5bfce", 00:15:57.738 "assigned_rate_limits": { 00:15:57.738 "rw_ios_per_sec": 0, 00:15:57.738 "rw_mbytes_per_sec": 0, 00:15:57.738 "r_mbytes_per_sec": 0, 00:15:57.738 "w_mbytes_per_sec": 0 00:15:57.738 }, 00:15:57.738 "claimed": true, 00:15:57.738 "claim_type": "exclusive_write", 00:15:57.738 "zoned": false, 00:15:57.738 "supported_io_types": { 00:15:57.738 "read": true, 00:15:57.738 "write": true, 00:15:57.738 "unmap": true, 00:15:57.738 "flush": true, 00:15:57.738 "reset": true, 00:15:57.738 "nvme_admin": false, 00:15:57.738 "nvme_io": false, 00:15:57.738 "nvme_io_md": false, 00:15:57.738 "write_zeroes": true, 00:15:57.738 "zcopy": true, 00:15:57.738 "get_zone_info": false, 00:15:57.738 "zone_management": false, 00:15:57.738 "zone_append": false, 00:15:57.738 "compare": false, 00:15:57.738 "compare_and_write": false, 00:15:57.738 "abort": true, 00:15:57.738 "seek_hole": false, 00:15:57.738 "seek_data": false, 00:15:57.738 "copy": true, 00:15:57.738 "nvme_iov_md": false 00:15:57.738 }, 00:15:57.738 "memory_domains": [ 00:15:57.738 { 00:15:57.738 "dma_device_id": "system", 00:15:57.738 "dma_device_type": 1 00:15:57.738 }, 00:15:57.738 { 00:15:57.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.738 "dma_device_type": 2 00:15:57.738 } 00:15:57.738 ], 00:15:57.738 "driver_specific": {} 00:15:57.738 } 
00:15:57.738 ] 00:15:57.738 08:52:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.738 08:52:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:57.738 08:52:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:57.738 08:52:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:57.738 08:52:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:57.739 08:52:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.739 08:52:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.739 08:52:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:57.739 08:52:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.739 08:52:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.739 08:52:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.739 08:52:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.739 08:52:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.739 08:52:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.739 08:52:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.739 08:52:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.739 08:52:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:57.739 08:52:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.739 "name": "Existed_Raid", 00:15:57.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.739 "strip_size_kb": 64, 00:15:57.739 "state": "configuring", 00:15:57.739 "raid_level": "raid5f", 00:15:57.739 "superblock": false, 00:15:57.739 "num_base_bdevs": 4, 00:15:57.739 "num_base_bdevs_discovered": 1, 00:15:57.739 "num_base_bdevs_operational": 4, 00:15:57.739 "base_bdevs_list": [ 00:15:57.739 { 00:15:57.739 "name": "BaseBdev1", 00:15:57.739 "uuid": "2c662114-0d27-4da6-99f9-412a4ac5bfce", 00:15:57.739 "is_configured": true, 00:15:57.739 "data_offset": 0, 00:15:57.739 "data_size": 65536 00:15:57.739 }, 00:15:57.739 { 00:15:57.739 "name": "BaseBdev2", 00:15:57.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.739 "is_configured": false, 00:15:57.739 "data_offset": 0, 00:15:57.739 "data_size": 0 00:15:57.739 }, 00:15:57.739 { 00:15:57.739 "name": "BaseBdev3", 00:15:57.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.739 "is_configured": false, 00:15:57.739 "data_offset": 0, 00:15:57.739 "data_size": 0 00:15:57.739 }, 00:15:57.739 { 00:15:57.739 "name": "BaseBdev4", 00:15:57.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.739 "is_configured": false, 00:15:57.739 "data_offset": 0, 00:15:57.739 "data_size": 0 00:15:57.739 } 00:15:57.739 ] 00:15:57.739 }' 00:15:57.739 08:52:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.739 08:52:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.998 08:52:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:57.998 08:52:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.998 08:52:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.998 
[2024-10-05 08:52:34.434977] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:57.998 [2024-10-05 08:52:34.435022] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:57.998 08:52:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.998 08:52:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:57.998 08:52:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.998 08:52:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.998 [2024-10-05 08:52:34.443005] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:57.998 [2024-10-05 08:52:34.444824] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:57.998 [2024-10-05 08:52:34.444870] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:57.999 [2024-10-05 08:52:34.444880] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:57.999 [2024-10-05 08:52:34.444891] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:57.999 [2024-10-05 08:52:34.444897] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:57.999 [2024-10-05 08:52:34.444905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:57.999 08:52:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.999 08:52:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:57.999 08:52:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:15:57.999 08:52:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:57.999 08:52:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:57.999 08:52:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:57.999 08:52:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.999 08:52:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.999 08:52:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:57.999 08:52:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.999 08:52:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.999 08:52:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.999 08:52:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.999 08:52:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.999 08:52:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.999 08:52:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.999 08:52:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.258 08:52:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.258 08:52:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.258 "name": "Existed_Raid", 00:15:58.258 "uuid": "00000000-0000-0000-0000-000000000000", 
00:15:58.258 "strip_size_kb": 64, 00:15:58.258 "state": "configuring", 00:15:58.258 "raid_level": "raid5f", 00:15:58.258 "superblock": false, 00:15:58.258 "num_base_bdevs": 4, 00:15:58.258 "num_base_bdevs_discovered": 1, 00:15:58.258 "num_base_bdevs_operational": 4, 00:15:58.258 "base_bdevs_list": [ 00:15:58.258 { 00:15:58.258 "name": "BaseBdev1", 00:15:58.258 "uuid": "2c662114-0d27-4da6-99f9-412a4ac5bfce", 00:15:58.258 "is_configured": true, 00:15:58.258 "data_offset": 0, 00:15:58.258 "data_size": 65536 00:15:58.258 }, 00:15:58.258 { 00:15:58.258 "name": "BaseBdev2", 00:15:58.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.258 "is_configured": false, 00:15:58.258 "data_offset": 0, 00:15:58.258 "data_size": 0 00:15:58.258 }, 00:15:58.258 { 00:15:58.258 "name": "BaseBdev3", 00:15:58.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.258 "is_configured": false, 00:15:58.258 "data_offset": 0, 00:15:58.258 "data_size": 0 00:15:58.258 }, 00:15:58.258 { 00:15:58.258 "name": "BaseBdev4", 00:15:58.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.258 "is_configured": false, 00:15:58.258 "data_offset": 0, 00:15:58.258 "data_size": 0 00:15:58.258 } 00:15:58.258 ] 00:15:58.258 }' 00:15:58.258 08:52:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.258 08:52:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.519 08:52:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:58.519 08:52:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.519 08:52:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.519 [2024-10-05 08:52:34.944858] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:58.519 BaseBdev2 00:15:58.519 08:52:34 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.519 08:52:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:58.519 08:52:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:58.519 08:52:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:58.519 08:52:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:58.519 08:52:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:58.519 08:52:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:58.519 08:52:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:58.519 08:52:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.519 08:52:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.519 08:52:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.519 08:52:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:58.519 08:52:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.519 08:52:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.519 [ 00:15:58.519 { 00:15:58.519 "name": "BaseBdev2", 00:15:58.519 "aliases": [ 00:15:58.519 "bf846f1c-6dfe-4bbe-9ff7-8fff58e50104" 00:15:58.519 ], 00:15:58.519 "product_name": "Malloc disk", 00:15:58.519 "block_size": 512, 00:15:58.519 "num_blocks": 65536, 00:15:58.519 "uuid": "bf846f1c-6dfe-4bbe-9ff7-8fff58e50104", 00:15:58.519 "assigned_rate_limits": { 00:15:58.519 "rw_ios_per_sec": 0, 00:15:58.519 "rw_mbytes_per_sec": 0, 00:15:58.519 
"r_mbytes_per_sec": 0, 00:15:58.519 "w_mbytes_per_sec": 0 00:15:58.519 }, 00:15:58.519 "claimed": true, 00:15:58.519 "claim_type": "exclusive_write", 00:15:58.519 "zoned": false, 00:15:58.519 "supported_io_types": { 00:15:58.519 "read": true, 00:15:58.519 "write": true, 00:15:58.519 "unmap": true, 00:15:58.519 "flush": true, 00:15:58.519 "reset": true, 00:15:58.519 "nvme_admin": false, 00:15:58.519 "nvme_io": false, 00:15:58.519 "nvme_io_md": false, 00:15:58.519 "write_zeroes": true, 00:15:58.519 "zcopy": true, 00:15:58.519 "get_zone_info": false, 00:15:58.519 "zone_management": false, 00:15:58.519 "zone_append": false, 00:15:58.519 "compare": false, 00:15:58.519 "compare_and_write": false, 00:15:58.519 "abort": true, 00:15:58.519 "seek_hole": false, 00:15:58.519 "seek_data": false, 00:15:58.519 "copy": true, 00:15:58.519 "nvme_iov_md": false 00:15:58.519 }, 00:15:58.519 "memory_domains": [ 00:15:58.519 { 00:15:58.519 "dma_device_id": "system", 00:15:58.519 "dma_device_type": 1 00:15:58.519 }, 00:15:58.519 { 00:15:58.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.519 "dma_device_type": 2 00:15:58.519 } 00:15:58.519 ], 00:15:58.519 "driver_specific": {} 00:15:58.519 } 00:15:58.519 ] 00:15:58.519 08:52:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.519 08:52:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:58.519 08:52:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:58.519 08:52:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:58.519 08:52:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:58.519 08:52:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:58.519 08:52:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:15:58.779 08:52:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:58.779 08:52:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.779 08:52:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:58.779 08:52:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.779 08:52:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.779 08:52:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.779 08:52:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.779 08:52:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.779 08:52:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.779 08:52:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.779 08:52:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.779 08:52:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.779 08:52:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.779 "name": "Existed_Raid", 00:15:58.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.779 "strip_size_kb": 64, 00:15:58.779 "state": "configuring", 00:15:58.779 "raid_level": "raid5f", 00:15:58.779 "superblock": false, 00:15:58.779 "num_base_bdevs": 4, 00:15:58.779 "num_base_bdevs_discovered": 2, 00:15:58.779 "num_base_bdevs_operational": 4, 00:15:58.779 "base_bdevs_list": [ 00:15:58.779 { 00:15:58.779 "name": "BaseBdev1", 00:15:58.779 "uuid": 
"2c662114-0d27-4da6-99f9-412a4ac5bfce", 00:15:58.779 "is_configured": true, 00:15:58.779 "data_offset": 0, 00:15:58.779 "data_size": 65536 00:15:58.779 }, 00:15:58.779 { 00:15:58.779 "name": "BaseBdev2", 00:15:58.779 "uuid": "bf846f1c-6dfe-4bbe-9ff7-8fff58e50104", 00:15:58.779 "is_configured": true, 00:15:58.779 "data_offset": 0, 00:15:58.779 "data_size": 65536 00:15:58.779 }, 00:15:58.779 { 00:15:58.779 "name": "BaseBdev3", 00:15:58.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.779 "is_configured": false, 00:15:58.779 "data_offset": 0, 00:15:58.779 "data_size": 0 00:15:58.779 }, 00:15:58.779 { 00:15:58.779 "name": "BaseBdev4", 00:15:58.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.779 "is_configured": false, 00:15:58.779 "data_offset": 0, 00:15:58.779 "data_size": 0 00:15:58.779 } 00:15:58.779 ] 00:15:58.779 }' 00:15:58.779 08:52:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.779 08:52:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.038 08:52:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:59.038 08:52:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.038 08:52:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.038 [2024-10-05 08:52:35.449068] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:59.038 BaseBdev3 00:15:59.038 08:52:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.038 08:52:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:59.038 08:52:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:59.038 08:52:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- 
# local bdev_timeout= 00:15:59.038 08:52:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:59.038 08:52:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:59.038 08:52:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:59.038 08:52:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:59.038 08:52:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.038 08:52:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.038 08:52:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.038 08:52:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:59.038 08:52:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.038 08:52:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.038 [ 00:15:59.038 { 00:15:59.038 "name": "BaseBdev3", 00:15:59.038 "aliases": [ 00:15:59.038 "bf8e57a3-1073-4fe7-8293-70ad3ff33432" 00:15:59.038 ], 00:15:59.038 "product_name": "Malloc disk", 00:15:59.038 "block_size": 512, 00:15:59.038 "num_blocks": 65536, 00:15:59.038 "uuid": "bf8e57a3-1073-4fe7-8293-70ad3ff33432", 00:15:59.038 "assigned_rate_limits": { 00:15:59.038 "rw_ios_per_sec": 0, 00:15:59.038 "rw_mbytes_per_sec": 0, 00:15:59.038 "r_mbytes_per_sec": 0, 00:15:59.038 "w_mbytes_per_sec": 0 00:15:59.038 }, 00:15:59.038 "claimed": true, 00:15:59.038 "claim_type": "exclusive_write", 00:15:59.038 "zoned": false, 00:15:59.038 "supported_io_types": { 00:15:59.038 "read": true, 00:15:59.038 "write": true, 00:15:59.038 "unmap": true, 00:15:59.038 "flush": true, 00:15:59.038 "reset": true, 00:15:59.038 "nvme_admin": false, 
00:15:59.038 "nvme_io": false, 00:15:59.038 "nvme_io_md": false, 00:15:59.038 "write_zeroes": true, 00:15:59.038 "zcopy": true, 00:15:59.038 "get_zone_info": false, 00:15:59.038 "zone_management": false, 00:15:59.038 "zone_append": false, 00:15:59.038 "compare": false, 00:15:59.038 "compare_and_write": false, 00:15:59.038 "abort": true, 00:15:59.038 "seek_hole": false, 00:15:59.038 "seek_data": false, 00:15:59.038 "copy": true, 00:15:59.038 "nvme_iov_md": false 00:15:59.038 }, 00:15:59.038 "memory_domains": [ 00:15:59.038 { 00:15:59.038 "dma_device_id": "system", 00:15:59.038 "dma_device_type": 1 00:15:59.038 }, 00:15:59.038 { 00:15:59.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.038 "dma_device_type": 2 00:15:59.038 } 00:15:59.038 ], 00:15:59.038 "driver_specific": {} 00:15:59.038 } 00:15:59.038 ] 00:15:59.038 08:52:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.038 08:52:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:59.038 08:52:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:59.038 08:52:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:59.038 08:52:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:59.038 08:52:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:59.038 08:52:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:59.038 08:52:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:59.038 08:52:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.038 08:52:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:15:59.038 08:52:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.038 08:52:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.038 08:52:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.038 08:52:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.038 08:52:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.038 08:52:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.038 08:52:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.038 08:52:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.297 08:52:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.297 08:52:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.297 "name": "Existed_Raid", 00:15:59.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.297 "strip_size_kb": 64, 00:15:59.297 "state": "configuring", 00:15:59.297 "raid_level": "raid5f", 00:15:59.297 "superblock": false, 00:15:59.297 "num_base_bdevs": 4, 00:15:59.297 "num_base_bdevs_discovered": 3, 00:15:59.297 "num_base_bdevs_operational": 4, 00:15:59.297 "base_bdevs_list": [ 00:15:59.297 { 00:15:59.297 "name": "BaseBdev1", 00:15:59.297 "uuid": "2c662114-0d27-4da6-99f9-412a4ac5bfce", 00:15:59.297 "is_configured": true, 00:15:59.297 "data_offset": 0, 00:15:59.297 "data_size": 65536 00:15:59.297 }, 00:15:59.297 { 00:15:59.297 "name": "BaseBdev2", 00:15:59.297 "uuid": "bf846f1c-6dfe-4bbe-9ff7-8fff58e50104", 00:15:59.297 "is_configured": true, 00:15:59.297 "data_offset": 0, 00:15:59.297 "data_size": 65536 00:15:59.297 }, 00:15:59.297 { 
00:15:59.297 "name": "BaseBdev3", 00:15:59.297 "uuid": "bf8e57a3-1073-4fe7-8293-70ad3ff33432", 00:15:59.297 "is_configured": true, 00:15:59.297 "data_offset": 0, 00:15:59.297 "data_size": 65536 00:15:59.297 }, 00:15:59.297 { 00:15:59.297 "name": "BaseBdev4", 00:15:59.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.297 "is_configured": false, 00:15:59.297 "data_offset": 0, 00:15:59.297 "data_size": 0 00:15:59.297 } 00:15:59.297 ] 00:15:59.297 }' 00:15:59.297 08:52:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.297 08:52:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.556 08:52:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:59.556 08:52:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.556 08:52:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.556 [2024-10-05 08:52:35.959948] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:59.556 [2024-10-05 08:52:35.960115] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:59.556 [2024-10-05 08:52:35.960148] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:59.556 [2024-10-05 08:52:35.960424] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:59.556 [2024-10-05 08:52:35.966737] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:59.556 [2024-10-05 08:52:35.966810] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:59.556 [2024-10-05 08:52:35.967103] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.556 BaseBdev4 00:15:59.556 08:52:35 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.556 08:52:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:59.556 08:52:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:15:59.556 08:52:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:59.556 08:52:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:59.556 08:52:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:59.556 08:52:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:59.556 08:52:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:59.556 08:52:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.556 08:52:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.556 08:52:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.556 08:52:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:59.556 08:52:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.556 08:52:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.556 [ 00:15:59.556 { 00:15:59.556 "name": "BaseBdev4", 00:15:59.556 "aliases": [ 00:15:59.556 "da24731a-3f77-41f5-bb2e-34688aba1865" 00:15:59.556 ], 00:15:59.556 "product_name": "Malloc disk", 00:15:59.556 "block_size": 512, 00:15:59.556 "num_blocks": 65536, 00:15:59.556 "uuid": "da24731a-3f77-41f5-bb2e-34688aba1865", 00:15:59.556 "assigned_rate_limits": { 00:15:59.556 "rw_ios_per_sec": 0, 00:15:59.556 
"rw_mbytes_per_sec": 0, 00:15:59.556 "r_mbytes_per_sec": 0, 00:15:59.556 "w_mbytes_per_sec": 0 00:15:59.556 }, 00:15:59.556 "claimed": true, 00:15:59.556 "claim_type": "exclusive_write", 00:15:59.556 "zoned": false, 00:15:59.556 "supported_io_types": { 00:15:59.556 "read": true, 00:15:59.556 "write": true, 00:15:59.556 "unmap": true, 00:15:59.556 "flush": true, 00:15:59.556 "reset": true, 00:15:59.556 "nvme_admin": false, 00:15:59.556 "nvme_io": false, 00:15:59.556 "nvme_io_md": false, 00:15:59.556 "write_zeroes": true, 00:15:59.556 "zcopy": true, 00:15:59.556 "get_zone_info": false, 00:15:59.556 "zone_management": false, 00:15:59.556 "zone_append": false, 00:15:59.556 "compare": false, 00:15:59.556 "compare_and_write": false, 00:15:59.556 "abort": true, 00:15:59.556 "seek_hole": false, 00:15:59.557 "seek_data": false, 00:15:59.557 "copy": true, 00:15:59.557 "nvme_iov_md": false 00:15:59.557 }, 00:15:59.557 "memory_domains": [ 00:15:59.557 { 00:15:59.557 "dma_device_id": "system", 00:15:59.557 "dma_device_type": 1 00:15:59.557 }, 00:15:59.557 { 00:15:59.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.557 "dma_device_type": 2 00:15:59.557 } 00:15:59.557 ], 00:15:59.557 "driver_specific": {} 00:15:59.557 } 00:15:59.557 ] 00:15:59.557 08:52:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.557 08:52:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:59.557 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:59.557 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:59.557 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:59.557 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:59.557 08:52:36 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.557 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:59.557 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.557 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:59.557 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.557 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.557 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.557 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.557 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.557 08:52:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.557 08:52:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.557 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.815 08:52:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.815 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.815 "name": "Existed_Raid", 00:15:59.815 "uuid": "af6fc613-11b0-436b-ac4b-435367d11090", 00:15:59.815 "strip_size_kb": 64, 00:15:59.815 "state": "online", 00:15:59.815 "raid_level": "raid5f", 00:15:59.815 "superblock": false, 00:15:59.815 "num_base_bdevs": 4, 00:15:59.815 "num_base_bdevs_discovered": 4, 00:15:59.815 "num_base_bdevs_operational": 4, 00:15:59.815 "base_bdevs_list": [ 00:15:59.815 { 00:15:59.815 "name": 
"BaseBdev1", 00:15:59.815 "uuid": "2c662114-0d27-4da6-99f9-412a4ac5bfce", 00:15:59.815 "is_configured": true, 00:15:59.815 "data_offset": 0, 00:15:59.815 "data_size": 65536 00:15:59.815 }, 00:15:59.815 { 00:15:59.815 "name": "BaseBdev2", 00:15:59.815 "uuid": "bf846f1c-6dfe-4bbe-9ff7-8fff58e50104", 00:15:59.815 "is_configured": true, 00:15:59.815 "data_offset": 0, 00:15:59.815 "data_size": 65536 00:15:59.815 }, 00:15:59.815 { 00:15:59.815 "name": "BaseBdev3", 00:15:59.815 "uuid": "bf8e57a3-1073-4fe7-8293-70ad3ff33432", 00:15:59.815 "is_configured": true, 00:15:59.815 "data_offset": 0, 00:15:59.815 "data_size": 65536 00:15:59.815 }, 00:15:59.815 { 00:15:59.815 "name": "BaseBdev4", 00:15:59.815 "uuid": "da24731a-3f77-41f5-bb2e-34688aba1865", 00:15:59.815 "is_configured": true, 00:15:59.815 "data_offset": 0, 00:15:59.815 "data_size": 65536 00:15:59.815 } 00:15:59.815 ] 00:15:59.815 }' 00:15:59.815 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.815 08:52:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.123 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:00.123 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:00.123 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:00.123 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:00.123 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:00.123 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:00.123 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:00.123 08:52:36 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.123 08:52:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.123 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:00.123 [2024-10-05 08:52:36.486003] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:00.123 08:52:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.123 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:00.123 "name": "Existed_Raid", 00:16:00.123 "aliases": [ 00:16:00.123 "af6fc613-11b0-436b-ac4b-435367d11090" 00:16:00.123 ], 00:16:00.123 "product_name": "Raid Volume", 00:16:00.123 "block_size": 512, 00:16:00.123 "num_blocks": 196608, 00:16:00.123 "uuid": "af6fc613-11b0-436b-ac4b-435367d11090", 00:16:00.123 "assigned_rate_limits": { 00:16:00.123 "rw_ios_per_sec": 0, 00:16:00.123 "rw_mbytes_per_sec": 0, 00:16:00.123 "r_mbytes_per_sec": 0, 00:16:00.123 "w_mbytes_per_sec": 0 00:16:00.123 }, 00:16:00.123 "claimed": false, 00:16:00.123 "zoned": false, 00:16:00.123 "supported_io_types": { 00:16:00.123 "read": true, 00:16:00.123 "write": true, 00:16:00.123 "unmap": false, 00:16:00.123 "flush": false, 00:16:00.123 "reset": true, 00:16:00.123 "nvme_admin": false, 00:16:00.123 "nvme_io": false, 00:16:00.123 "nvme_io_md": false, 00:16:00.123 "write_zeroes": true, 00:16:00.123 "zcopy": false, 00:16:00.123 "get_zone_info": false, 00:16:00.123 "zone_management": false, 00:16:00.123 "zone_append": false, 00:16:00.123 "compare": false, 00:16:00.123 "compare_and_write": false, 00:16:00.123 "abort": false, 00:16:00.123 "seek_hole": false, 00:16:00.123 "seek_data": false, 00:16:00.123 "copy": false, 00:16:00.123 "nvme_iov_md": false 00:16:00.123 }, 00:16:00.123 "driver_specific": { 00:16:00.123 "raid": { 00:16:00.123 "uuid": "af6fc613-11b0-436b-ac4b-435367d11090", 00:16:00.123 "strip_size_kb": 64, 
00:16:00.123 "state": "online", 00:16:00.123 "raid_level": "raid5f", 00:16:00.123 "superblock": false, 00:16:00.123 "num_base_bdevs": 4, 00:16:00.123 "num_base_bdevs_discovered": 4, 00:16:00.123 "num_base_bdevs_operational": 4, 00:16:00.123 "base_bdevs_list": [ 00:16:00.123 { 00:16:00.123 "name": "BaseBdev1", 00:16:00.123 "uuid": "2c662114-0d27-4da6-99f9-412a4ac5bfce", 00:16:00.123 "is_configured": true, 00:16:00.123 "data_offset": 0, 00:16:00.123 "data_size": 65536 00:16:00.123 }, 00:16:00.123 { 00:16:00.123 "name": "BaseBdev2", 00:16:00.123 "uuid": "bf846f1c-6dfe-4bbe-9ff7-8fff58e50104", 00:16:00.123 "is_configured": true, 00:16:00.123 "data_offset": 0, 00:16:00.123 "data_size": 65536 00:16:00.123 }, 00:16:00.123 { 00:16:00.123 "name": "BaseBdev3", 00:16:00.123 "uuid": "bf8e57a3-1073-4fe7-8293-70ad3ff33432", 00:16:00.123 "is_configured": true, 00:16:00.123 "data_offset": 0, 00:16:00.123 "data_size": 65536 00:16:00.123 }, 00:16:00.123 { 00:16:00.123 "name": "BaseBdev4", 00:16:00.123 "uuid": "da24731a-3f77-41f5-bb2e-34688aba1865", 00:16:00.123 "is_configured": true, 00:16:00.123 "data_offset": 0, 00:16:00.123 "data_size": 65536 00:16:00.123 } 00:16:00.123 ] 00:16:00.123 } 00:16:00.123 } 00:16:00.123 }' 00:16:00.123 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:00.123 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:00.123 BaseBdev2 00:16:00.123 BaseBdev3 00:16:00.123 BaseBdev4' 00:16:00.123 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.382 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:00.382 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:00.382 08:52:36 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:00.382 08:52:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.382 08:52:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.382 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.382 08:52:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.382 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:00.382 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:00.382 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:00.382 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:00.382 08:52:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.382 08:52:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.382 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.382 08:52:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.382 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:00.382 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:00.382 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:00.382 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.382 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:00.382 08:52:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.382 08:52:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.382 08:52:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.382 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:00.382 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:00.382 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:00.382 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.382 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:00.382 08:52:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.382 08:52:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.382 08:52:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.382 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:00.382 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:00.382 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:00.382 08:52:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.382 08:52:36 bdev_raid.raid5f_state_function_test -- 
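The trace above exercises the format check at bdev_raid.sh@187-193: capture the raid volume's `[block_size, md_size, md_interleave, dif_type]` tuple, list the configured base bdevs, then compare each base bdev's tuple against the volume's. A self-contained sketch of that loop follows; the JSON literals are trimmed stand-ins for the `rpc_cmd` replies (requires `jq`):

```shell
#!/usr/bin/env bash
# Sketch of the bdev_raid.sh@187-193 format check. The JSON below is a
# trimmed stand-in for live `rpc_cmd bdev_get_bdevs` output, not real data.
set -euo pipefail

raid_info='{"block_size":512,"driver_specific":{"raid":{"base_bdevs_list":[{"name":"BaseBdev1","is_configured":true},{"name":"BaseBdev2","is_configured":true}]}}}'

# bdev_raid.sh@189: absent md_size/md_interleave/dif_type come back as null,
# which jq's join() renders as empty strings -- hence the trailing spaces in
# cmp_raid_bdev='512   ' visible in the trace comparison [[ 512 == \5\1\2\ \ \  ]].
cmp_raid_bdev=$(jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' <<<"$raid_info")

# bdev_raid.sh@188: only configured base bdevs take part in the comparison.
base_bdev_names=$(jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' <<<"$raid_info")

for name in $base_bdev_names; do
    base_info='[{"block_size":512}]'   # stand-in for: rpc_cmd bdev_get_bdevs -b "$name"
    cmp_base_bdev=$(jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' <<<"$base_info")
    if [[ "$cmp_base_bdev" == "$cmp_raid_bdev" ]]; then
        echo "$name matches raid volume format"
    fi
done
```

The trailing-space detail matters: the test asserts byte equality of the joined tuples, so a base bdev that reported any metadata field would fail the `[[ ... == ... ]]` even with a matching block size.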
common/autotest_common.sh@10 -- # set +x 00:16:00.382 [2024-10-05 08:52:36.817284] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:00.641 08:52:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.641 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:00.641 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:00.641 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:00.641 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:00.641 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:00.641 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:00.641 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:00.641 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.641 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:00.641 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.641 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:00.641 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.641 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.641 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.641 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.641 08:52:36 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.641 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.641 08:52:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.641 08:52:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.641 08:52:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.641 08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.641 "name": "Existed_Raid", 00:16:00.641 "uuid": "af6fc613-11b0-436b-ac4b-435367d11090", 00:16:00.641 "strip_size_kb": 64, 00:16:00.641 "state": "online", 00:16:00.641 "raid_level": "raid5f", 00:16:00.641 "superblock": false, 00:16:00.641 "num_base_bdevs": 4, 00:16:00.641 "num_base_bdevs_discovered": 3, 00:16:00.641 "num_base_bdevs_operational": 3, 00:16:00.641 "base_bdevs_list": [ 00:16:00.641 { 00:16:00.641 "name": null, 00:16:00.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.641 "is_configured": false, 00:16:00.641 "data_offset": 0, 00:16:00.642 "data_size": 65536 00:16:00.642 }, 00:16:00.642 { 00:16:00.642 "name": "BaseBdev2", 00:16:00.642 "uuid": "bf846f1c-6dfe-4bbe-9ff7-8fff58e50104", 00:16:00.642 "is_configured": true, 00:16:00.642 "data_offset": 0, 00:16:00.642 "data_size": 65536 00:16:00.642 }, 00:16:00.642 { 00:16:00.642 "name": "BaseBdev3", 00:16:00.642 "uuid": "bf8e57a3-1073-4fe7-8293-70ad3ff33432", 00:16:00.642 "is_configured": true, 00:16:00.642 "data_offset": 0, 00:16:00.642 "data_size": 65536 00:16:00.642 }, 00:16:00.642 { 00:16:00.642 "name": "BaseBdev4", 00:16:00.642 "uuid": "da24731a-3f77-41f5-bb2e-34688aba1865", 00:16:00.642 "is_configured": true, 00:16:00.642 "data_offset": 0, 00:16:00.642 "data_size": 65536 00:16:00.642 } 00:16:00.642 ] 00:16:00.642 }' 00:16:00.642 
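Just before this point the trace deletes BaseBdev1 and decides what state to expect (bdev_raid.sh@198-199 and @261-264): because raid5f has redundancy, the array is expected to stay `online` with 3 of 4 base bdevs configured. A minimal sketch of that decision; the exact level list and the `offline` fallback for non-redundant levels are assumptions for illustration, not copied from the script:

```shell
# Sketch of the expected-state decision traced at bdev_raid.sh@198-199 and
# @261-264. Which levels count as redundant, and "offline" as the
# non-redundant outcome, are assumptions here.
has_redundancy() {
    case $1 in
        raid1 | raid5f) return 0 ;;  # can lose a base bdev and keep serving I/O
        *) return 1 ;;
    esac
}

expected_state_after_removal() {
    if has_redundancy "$1"; then
        echo online      # matches expected_state=online in the trace
    else
        echo offline     # assumed: without redundancy the array cannot stay up
    fi
}

expected_state_after_removal raid5f
```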
08:52:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.642 08:52:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.900 08:52:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:00.900 08:52:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:00.900 08:52:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:00.900 08:52:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.900 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.900 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.158 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.158 08:52:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:01.158 08:52:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:01.158 08:52:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:01.158 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.158 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.158 [2024-10-05 08:52:37.421019] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:01.158 [2024-10-05 08:52:37.421138] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:01.158 [2024-10-05 08:52:37.508655] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:01.158 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:16:01.158 08:52:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:01.159 08:52:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:01.159 08:52:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.159 08:52:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:01.159 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.159 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.159 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.159 08:52:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:01.159 08:52:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:01.159 08:52:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:01.159 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.159 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.159 [2024-10-05 08:52:37.564586] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:01.418 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.418 08:52:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:01.418 08:52:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:01.418 08:52:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.418 08:52:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:16:01.418 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.418 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.418 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.418 08:52:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:01.418 08:52:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:01.418 08:52:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:01.418 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.418 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.418 [2024-10-05 08:52:37.716100] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:01.418 [2024-10-05 08:52:37.716155] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:01.418 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.418 08:52:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:01.418 08:52:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:01.418 08:52:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.418 08:52:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:01.418 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.418 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.418 08:52:37 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.418 08:52:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:01.418 08:52:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:01.418 08:52:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:01.418 08:52:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:01.418 08:52:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:01.418 08:52:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:01.418 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.418 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.678 BaseBdev2 00:16:01.678 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.678 08:52:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:01.678 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:01.679 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:01.679 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:01.679 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:01.679 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:01.679 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:01.679 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:01.679 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.679 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.679 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:01.679 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.679 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.679 [ 00:16:01.679 { 00:16:01.679 "name": "BaseBdev2", 00:16:01.679 "aliases": [ 00:16:01.679 "2d819485-c5d8-42f1-bfd1-6f0d88dcd000" 00:16:01.679 ], 00:16:01.679 "product_name": "Malloc disk", 00:16:01.679 "block_size": 512, 00:16:01.679 "num_blocks": 65536, 00:16:01.679 "uuid": "2d819485-c5d8-42f1-bfd1-6f0d88dcd000", 00:16:01.679 "assigned_rate_limits": { 00:16:01.679 "rw_ios_per_sec": 0, 00:16:01.679 "rw_mbytes_per_sec": 0, 00:16:01.679 "r_mbytes_per_sec": 0, 00:16:01.679 "w_mbytes_per_sec": 0 00:16:01.679 }, 00:16:01.679 "claimed": false, 00:16:01.679 "zoned": false, 00:16:01.679 "supported_io_types": { 00:16:01.679 "read": true, 00:16:01.679 "write": true, 00:16:01.679 "unmap": true, 00:16:01.679 "flush": true, 00:16:01.679 "reset": true, 00:16:01.679 "nvme_admin": false, 00:16:01.679 "nvme_io": false, 00:16:01.679 "nvme_io_md": false, 00:16:01.679 "write_zeroes": true, 00:16:01.679 "zcopy": true, 00:16:01.679 "get_zone_info": false, 00:16:01.679 "zone_management": false, 00:16:01.679 "zone_append": false, 00:16:01.679 "compare": false, 00:16:01.679 "compare_and_write": false, 00:16:01.679 "abort": true, 00:16:01.679 "seek_hole": false, 00:16:01.679 "seek_data": false, 00:16:01.679 "copy": true, 00:16:01.679 "nvme_iov_md": false 00:16:01.679 }, 00:16:01.679 "memory_domains": [ 00:16:01.679 { 00:16:01.679 "dma_device_id": "system", 00:16:01.679 "dma_device_type": 1 00:16:01.679 }, 
00:16:01.679 { 00:16:01.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.679 "dma_device_type": 2 00:16:01.679 } 00:16:01.679 ], 00:16:01.679 "driver_specific": {} 00:16:01.679 } 00:16:01.679 ] 00:16:01.679 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.679 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:01.679 08:52:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:01.679 08:52:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:01.679 08:52:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:01.679 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.679 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.679 BaseBdev3 00:16:01.679 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.679 08:52:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:01.679 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:01.679 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:01.679 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:01.679 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:01.679 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:01.679 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:01.679 08:52:37 bdev_raid.raid5f_state_function_test -- 
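Each re-created malloc bdev goes through `waitforbdev` (autotest_common.sh@899-907), which in this trace resolves to `rpc_cmd bdev_wait_for_examine` followed by `bdev_get_bdevs -b <name> -t 2000`, i.e. the RPC layer itself blocks until the bdev appears. Without an SPDK target the same shape can be sketched as a polling loop; `probe` and everything besides the 2000 ms default timeout are hypothetical:

```shell
# Rough stand-in for the waitforbdev helper traced above. The real helper
# delegates the waiting to the -t 2000 RPC timeout; this sketch swaps the RPC
# for a hypothetical probe function to show the wait's overall shape.
waitfor() {
    local name=$1 timeout=${2:-2000}   # timeout in ms, mirroring -t 2000
    local waited=0
    until probe "$name"; do
        (( waited >= timeout )) && return 1
        sleep 0.05
        (( waited += 50 ))
    done
}

# Toy probe: a bdev "exists" once it has an entry in the BDEVS table.
probe() { [[ -n "${BDEVS[$1]:-}" ]]; }

declare -A BDEVS=([BaseBdev2]=1)
waitfor BaseBdev2 && echo "BaseBdev2 ready"
```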
common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.679 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.679 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.679 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:01.679 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.679 08:52:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.679 [ 00:16:01.679 { 00:16:01.679 "name": "BaseBdev3", 00:16:01.679 "aliases": [ 00:16:01.679 "18ee83b6-7427-4d2e-a262-dca2529469ae" 00:16:01.679 ], 00:16:01.679 "product_name": "Malloc disk", 00:16:01.679 "block_size": 512, 00:16:01.679 "num_blocks": 65536, 00:16:01.679 "uuid": "18ee83b6-7427-4d2e-a262-dca2529469ae", 00:16:01.679 "assigned_rate_limits": { 00:16:01.679 "rw_ios_per_sec": 0, 00:16:01.679 "rw_mbytes_per_sec": 0, 00:16:01.679 "r_mbytes_per_sec": 0, 00:16:01.679 "w_mbytes_per_sec": 0 00:16:01.679 }, 00:16:01.679 "claimed": false, 00:16:01.679 "zoned": false, 00:16:01.679 "supported_io_types": { 00:16:01.679 "read": true, 00:16:01.679 "write": true, 00:16:01.679 "unmap": true, 00:16:01.679 "flush": true, 00:16:01.679 "reset": true, 00:16:01.679 "nvme_admin": false, 00:16:01.679 "nvme_io": false, 00:16:01.679 "nvme_io_md": false, 00:16:01.679 "write_zeroes": true, 00:16:01.679 "zcopy": true, 00:16:01.679 "get_zone_info": false, 00:16:01.679 "zone_management": false, 00:16:01.679 "zone_append": false, 00:16:01.679 "compare": false, 00:16:01.679 "compare_and_write": false, 00:16:01.679 "abort": true, 00:16:01.679 "seek_hole": false, 00:16:01.679 "seek_data": false, 00:16:01.679 "copy": true, 00:16:01.679 "nvme_iov_md": false 00:16:01.679 }, 00:16:01.679 "memory_domains": [ 00:16:01.679 { 00:16:01.679 "dma_device_id": "system", 00:16:01.679 
"dma_device_type": 1 00:16:01.679 }, 00:16:01.679 { 00:16:01.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.679 "dma_device_type": 2 00:16:01.679 } 00:16:01.679 ], 00:16:01.679 "driver_specific": {} 00:16:01.679 } 00:16:01.679 ] 00:16:01.679 08:52:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.679 08:52:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:01.679 08:52:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:01.679 08:52:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:01.679 08:52:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:01.679 08:52:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.679 08:52:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.679 BaseBdev4 00:16:01.679 08:52:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.679 08:52:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:01.679 08:52:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:16:01.679 08:52:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:01.679 08:52:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:01.679 08:52:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:01.679 08:52:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:01.679 08:52:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:01.679 08:52:38 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.679 08:52:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.679 08:52:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.679 08:52:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:01.679 08:52:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.679 08:52:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.679 [ 00:16:01.679 { 00:16:01.679 "name": "BaseBdev4", 00:16:01.679 "aliases": [ 00:16:01.679 "c371db57-1c44-4360-976e-7c896f20e82a" 00:16:01.679 ], 00:16:01.679 "product_name": "Malloc disk", 00:16:01.679 "block_size": 512, 00:16:01.679 "num_blocks": 65536, 00:16:01.679 "uuid": "c371db57-1c44-4360-976e-7c896f20e82a", 00:16:01.679 "assigned_rate_limits": { 00:16:01.679 "rw_ios_per_sec": 0, 00:16:01.679 "rw_mbytes_per_sec": 0, 00:16:01.679 "r_mbytes_per_sec": 0, 00:16:01.679 "w_mbytes_per_sec": 0 00:16:01.679 }, 00:16:01.679 "claimed": false, 00:16:01.679 "zoned": false, 00:16:01.679 "supported_io_types": { 00:16:01.679 "read": true, 00:16:01.679 "write": true, 00:16:01.679 "unmap": true, 00:16:01.679 "flush": true, 00:16:01.679 "reset": true, 00:16:01.679 "nvme_admin": false, 00:16:01.679 "nvme_io": false, 00:16:01.679 "nvme_io_md": false, 00:16:01.679 "write_zeroes": true, 00:16:01.679 "zcopy": true, 00:16:01.679 "get_zone_info": false, 00:16:01.679 "zone_management": false, 00:16:01.679 "zone_append": false, 00:16:01.679 "compare": false, 00:16:01.679 "compare_and_write": false, 00:16:01.679 "abort": true, 00:16:01.679 "seek_hole": false, 00:16:01.679 "seek_data": false, 00:16:01.679 "copy": true, 00:16:01.679 "nvme_iov_md": false 00:16:01.679 }, 00:16:01.679 "memory_domains": [ 00:16:01.679 { 00:16:01.679 
"dma_device_id": "system", 00:16:01.679 "dma_device_type": 1 00:16:01.679 }, 00:16:01.679 { 00:16:01.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.679 "dma_device_type": 2 00:16:01.679 } 00:16:01.680 ], 00:16:01.680 "driver_specific": {} 00:16:01.680 } 00:16:01.680 ] 00:16:01.680 08:52:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.680 08:52:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:01.680 08:52:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:01.680 08:52:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:01.680 08:52:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:01.680 08:52:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.680 08:52:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.680 [2024-10-05 08:52:38.109714] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:01.680 [2024-10-05 08:52:38.109847] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:01.680 [2024-10-05 08:52:38.109904] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:01.680 [2024-10-05 08:52:38.111665] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:01.680 [2024-10-05 08:52:38.111758] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:01.680 08:52:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.680 08:52:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:16:01.680 08:52:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:01.680 08:52:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:01.680 08:52:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:01.680 08:52:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.680 08:52:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:01.680 08:52:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.680 08:52:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.680 08:52:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.680 08:52:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.680 08:52:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.680 08:52:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.680 08:52:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.680 08:52:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.680 08:52:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.940 08:52:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.940 "name": "Existed_Raid", 00:16:01.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.940 "strip_size_kb": 64, 00:16:01.940 "state": "configuring", 00:16:01.940 "raid_level": "raid5f", 00:16:01.940 "superblock": false, 00:16:01.940 
"num_base_bdevs": 4, 00:16:01.940 "num_base_bdevs_discovered": 3, 00:16:01.940 "num_base_bdevs_operational": 4, 00:16:01.940 "base_bdevs_list": [ 00:16:01.940 { 00:16:01.940 "name": "BaseBdev1", 00:16:01.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.940 "is_configured": false, 00:16:01.940 "data_offset": 0, 00:16:01.940 "data_size": 0 00:16:01.940 }, 00:16:01.940 { 00:16:01.940 "name": "BaseBdev2", 00:16:01.940 "uuid": "2d819485-c5d8-42f1-bfd1-6f0d88dcd000", 00:16:01.940 "is_configured": true, 00:16:01.940 "data_offset": 0, 00:16:01.940 "data_size": 65536 00:16:01.940 }, 00:16:01.940 { 00:16:01.940 "name": "BaseBdev3", 00:16:01.940 "uuid": "18ee83b6-7427-4d2e-a262-dca2529469ae", 00:16:01.940 "is_configured": true, 00:16:01.940 "data_offset": 0, 00:16:01.940 "data_size": 65536 00:16:01.940 }, 00:16:01.940 { 00:16:01.940 "name": "BaseBdev4", 00:16:01.940 "uuid": "c371db57-1c44-4360-976e-7c896f20e82a", 00:16:01.940 "is_configured": true, 00:16:01.940 "data_offset": 0, 00:16:01.940 "data_size": 65536 00:16:01.940 } 00:16:01.940 ] 00:16:01.940 }' 00:16:01.940 08:52:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.940 08:52:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.199 08:52:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:02.199 08:52:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.199 08:52:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.199 [2024-10-05 08:52:38.545014] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:02.199 08:52:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.199 08:52:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:16:02.199 08:52:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:02.199 08:52:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:02.199 08:52:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:02.199 08:52:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.199 08:52:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:02.199 08:52:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.199 08:52:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.199 08:52:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.199 08:52:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.199 08:52:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.199 08:52:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.199 08:52:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.199 08:52:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.199 08:52:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.199 08:52:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.199 "name": "Existed_Raid", 00:16:02.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.199 "strip_size_kb": 64, 00:16:02.199 "state": "configuring", 00:16:02.199 "raid_level": "raid5f", 00:16:02.199 "superblock": false, 00:16:02.199 "num_base_bdevs": 4, 
00:16:02.199 "num_base_bdevs_discovered": 2, 00:16:02.199 "num_base_bdevs_operational": 4, 00:16:02.199 "base_bdevs_list": [ 00:16:02.199 { 00:16:02.199 "name": "BaseBdev1", 00:16:02.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.199 "is_configured": false, 00:16:02.199 "data_offset": 0, 00:16:02.199 "data_size": 0 00:16:02.199 }, 00:16:02.199 { 00:16:02.199 "name": null, 00:16:02.199 "uuid": "2d819485-c5d8-42f1-bfd1-6f0d88dcd000", 00:16:02.199 "is_configured": false, 00:16:02.199 "data_offset": 0, 00:16:02.199 "data_size": 65536 00:16:02.199 }, 00:16:02.199 { 00:16:02.199 "name": "BaseBdev3", 00:16:02.199 "uuid": "18ee83b6-7427-4d2e-a262-dca2529469ae", 00:16:02.199 "is_configured": true, 00:16:02.199 "data_offset": 0, 00:16:02.199 "data_size": 65536 00:16:02.199 }, 00:16:02.199 { 00:16:02.199 "name": "BaseBdev4", 00:16:02.199 "uuid": "c371db57-1c44-4360-976e-7c896f20e82a", 00:16:02.199 "is_configured": true, 00:16:02.199 "data_offset": 0, 00:16:02.199 "data_size": 65536 00:16:02.199 } 00:16:02.199 ] 00:16:02.199 }' 00:16:02.199 08:52:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.199 08:52:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.768 08:52:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.768 08:52:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:02.768 08:52:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.768 08:52:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.768 08:52:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.768 08:52:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:02.768 08:52:38 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:02.768 08:52:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.768 08:52:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.768 [2024-10-05 08:52:39.018659] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:02.768 BaseBdev1 00:16:02.768 08:52:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.768 08:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:02.768 08:52:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:02.768 08:52:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:02.768 08:52:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:02.768 08:52:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:02.768 08:52:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:02.769 08:52:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:02.769 08:52:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.769 08:52:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.769 08:52:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.769 08:52:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:02.769 08:52:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.769 08:52:39 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.769 [ 00:16:02.769 { 00:16:02.769 "name": "BaseBdev1", 00:16:02.769 "aliases": [ 00:16:02.769 "d61c2b60-3fe5-4b50-a021-734ae903eecd" 00:16:02.769 ], 00:16:02.769 "product_name": "Malloc disk", 00:16:02.769 "block_size": 512, 00:16:02.769 "num_blocks": 65536, 00:16:02.769 "uuid": "d61c2b60-3fe5-4b50-a021-734ae903eecd", 00:16:02.769 "assigned_rate_limits": { 00:16:02.769 "rw_ios_per_sec": 0, 00:16:02.769 "rw_mbytes_per_sec": 0, 00:16:02.769 "r_mbytes_per_sec": 0, 00:16:02.769 "w_mbytes_per_sec": 0 00:16:02.769 }, 00:16:02.769 "claimed": true, 00:16:02.769 "claim_type": "exclusive_write", 00:16:02.769 "zoned": false, 00:16:02.769 "supported_io_types": { 00:16:02.769 "read": true, 00:16:02.769 "write": true, 00:16:02.769 "unmap": true, 00:16:02.769 "flush": true, 00:16:02.769 "reset": true, 00:16:02.769 "nvme_admin": false, 00:16:02.769 "nvme_io": false, 00:16:02.769 "nvme_io_md": false, 00:16:02.769 "write_zeroes": true, 00:16:02.769 "zcopy": true, 00:16:02.769 "get_zone_info": false, 00:16:02.769 "zone_management": false, 00:16:02.769 "zone_append": false, 00:16:02.769 "compare": false, 00:16:02.769 "compare_and_write": false, 00:16:02.769 "abort": true, 00:16:02.769 "seek_hole": false, 00:16:02.769 "seek_data": false, 00:16:02.769 "copy": true, 00:16:02.769 "nvme_iov_md": false 00:16:02.769 }, 00:16:02.769 "memory_domains": [ 00:16:02.769 { 00:16:02.769 "dma_device_id": "system", 00:16:02.769 "dma_device_type": 1 00:16:02.769 }, 00:16:02.769 { 00:16:02.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.769 "dma_device_type": 2 00:16:02.769 } 00:16:02.769 ], 00:16:02.769 "driver_specific": {} 00:16:02.769 } 00:16:02.769 ] 00:16:02.769 08:52:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.769 08:52:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:02.769 08:52:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:02.769 08:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:02.769 08:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:02.769 08:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:02.769 08:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.769 08:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:02.769 08:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.769 08:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.769 08:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.769 08:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.769 08:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.769 08:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.769 08:52:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.769 08:52:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.769 08:52:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.769 08:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.769 "name": "Existed_Raid", 00:16:02.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.769 "strip_size_kb": 64, 00:16:02.769 "state": 
"configuring", 00:16:02.769 "raid_level": "raid5f", 00:16:02.769 "superblock": false, 00:16:02.769 "num_base_bdevs": 4, 00:16:02.769 "num_base_bdevs_discovered": 3, 00:16:02.769 "num_base_bdevs_operational": 4, 00:16:02.769 "base_bdevs_list": [ 00:16:02.769 { 00:16:02.769 "name": "BaseBdev1", 00:16:02.769 "uuid": "d61c2b60-3fe5-4b50-a021-734ae903eecd", 00:16:02.769 "is_configured": true, 00:16:02.769 "data_offset": 0, 00:16:02.769 "data_size": 65536 00:16:02.769 }, 00:16:02.769 { 00:16:02.769 "name": null, 00:16:02.769 "uuid": "2d819485-c5d8-42f1-bfd1-6f0d88dcd000", 00:16:02.769 "is_configured": false, 00:16:02.769 "data_offset": 0, 00:16:02.769 "data_size": 65536 00:16:02.769 }, 00:16:02.769 { 00:16:02.769 "name": "BaseBdev3", 00:16:02.769 "uuid": "18ee83b6-7427-4d2e-a262-dca2529469ae", 00:16:02.769 "is_configured": true, 00:16:02.769 "data_offset": 0, 00:16:02.769 "data_size": 65536 00:16:02.769 }, 00:16:02.769 { 00:16:02.769 "name": "BaseBdev4", 00:16:02.769 "uuid": "c371db57-1c44-4360-976e-7c896f20e82a", 00:16:02.769 "is_configured": true, 00:16:02.769 "data_offset": 0, 00:16:02.769 "data_size": 65536 00:16:02.769 } 00:16:02.769 ] 00:16:02.769 }' 00:16:02.769 08:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.769 08:52:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.339 08:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.339 08:52:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.339 08:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:03.339 08:52:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.339 08:52:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.339 08:52:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:03.339 08:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:03.339 08:52:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.339 08:52:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.339 [2024-10-05 08:52:39.601694] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:03.339 08:52:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.339 08:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:03.339 08:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:03.339 08:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:03.339 08:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:03.339 08:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.339 08:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:03.339 08:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.339 08:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.339 08:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.339 08:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.339 08:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.339 08:52:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.339 08:52:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.339 08:52:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.339 08:52:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.339 08:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.339 "name": "Existed_Raid", 00:16:03.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.339 "strip_size_kb": 64, 00:16:03.339 "state": "configuring", 00:16:03.339 "raid_level": "raid5f", 00:16:03.339 "superblock": false, 00:16:03.339 "num_base_bdevs": 4, 00:16:03.339 "num_base_bdevs_discovered": 2, 00:16:03.339 "num_base_bdevs_operational": 4, 00:16:03.339 "base_bdevs_list": [ 00:16:03.339 { 00:16:03.339 "name": "BaseBdev1", 00:16:03.339 "uuid": "d61c2b60-3fe5-4b50-a021-734ae903eecd", 00:16:03.340 "is_configured": true, 00:16:03.340 "data_offset": 0, 00:16:03.340 "data_size": 65536 00:16:03.340 }, 00:16:03.340 { 00:16:03.340 "name": null, 00:16:03.340 "uuid": "2d819485-c5d8-42f1-bfd1-6f0d88dcd000", 00:16:03.340 "is_configured": false, 00:16:03.340 "data_offset": 0, 00:16:03.340 "data_size": 65536 00:16:03.340 }, 00:16:03.340 { 00:16:03.340 "name": null, 00:16:03.340 "uuid": "18ee83b6-7427-4d2e-a262-dca2529469ae", 00:16:03.340 "is_configured": false, 00:16:03.340 "data_offset": 0, 00:16:03.340 "data_size": 65536 00:16:03.340 }, 00:16:03.340 { 00:16:03.340 "name": "BaseBdev4", 00:16:03.340 "uuid": "c371db57-1c44-4360-976e-7c896f20e82a", 00:16:03.340 "is_configured": true, 00:16:03.340 "data_offset": 0, 00:16:03.340 "data_size": 65536 00:16:03.340 } 00:16:03.340 ] 00:16:03.340 }' 00:16:03.340 08:52:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.340 08:52:39 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.600 08:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.600 08:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.600 08:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.600 08:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:03.600 08:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.860 08:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:03.860 08:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:03.860 08:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.860 08:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.860 [2024-10-05 08:52:40.081120] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:03.860 08:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.860 08:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:03.860 08:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:03.860 08:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:03.860 08:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:03.860 08:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.860 
08:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:03.860 08:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.860 08:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.860 08:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.860 08:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.860 08:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.860 08:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.860 08:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.860 08:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.860 08:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.860 08:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.860 "name": "Existed_Raid", 00:16:03.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.860 "strip_size_kb": 64, 00:16:03.860 "state": "configuring", 00:16:03.860 "raid_level": "raid5f", 00:16:03.860 "superblock": false, 00:16:03.860 "num_base_bdevs": 4, 00:16:03.860 "num_base_bdevs_discovered": 3, 00:16:03.860 "num_base_bdevs_operational": 4, 00:16:03.860 "base_bdevs_list": [ 00:16:03.860 { 00:16:03.860 "name": "BaseBdev1", 00:16:03.860 "uuid": "d61c2b60-3fe5-4b50-a021-734ae903eecd", 00:16:03.860 "is_configured": true, 00:16:03.860 "data_offset": 0, 00:16:03.860 "data_size": 65536 00:16:03.860 }, 00:16:03.860 { 00:16:03.860 "name": null, 00:16:03.860 "uuid": "2d819485-c5d8-42f1-bfd1-6f0d88dcd000", 00:16:03.860 "is_configured": 
false, 00:16:03.860 "data_offset": 0, 00:16:03.860 "data_size": 65536 00:16:03.860 }, 00:16:03.860 { 00:16:03.860 "name": "BaseBdev3", 00:16:03.860 "uuid": "18ee83b6-7427-4d2e-a262-dca2529469ae", 00:16:03.860 "is_configured": true, 00:16:03.860 "data_offset": 0, 00:16:03.860 "data_size": 65536 00:16:03.860 }, 00:16:03.860 { 00:16:03.860 "name": "BaseBdev4", 00:16:03.860 "uuid": "c371db57-1c44-4360-976e-7c896f20e82a", 00:16:03.860 "is_configured": true, 00:16:03.860 "data_offset": 0, 00:16:03.860 "data_size": 65536 00:16:03.860 } 00:16:03.860 ] 00:16:03.860 }' 00:16:03.860 08:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.860 08:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.120 08:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.120 08:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:04.120 08:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.120 08:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.120 08:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.381 08:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:04.381 08:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:04.381 08:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.381 08:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.381 [2024-10-05 08:52:40.616139] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:04.381 08:52:40 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.381 08:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:04.381 08:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:04.381 08:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:04.381 08:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:04.381 08:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.381 08:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:04.381 08:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.381 08:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.381 08:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.381 08:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.381 08:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.381 08:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.381 08:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.381 08:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.381 08:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.381 08:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.381 "name": "Existed_Raid", 00:16:04.381 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:04.381 "strip_size_kb": 64, 00:16:04.381 "state": "configuring", 00:16:04.381 "raid_level": "raid5f", 00:16:04.381 "superblock": false, 00:16:04.381 "num_base_bdevs": 4, 00:16:04.381 "num_base_bdevs_discovered": 2, 00:16:04.381 "num_base_bdevs_operational": 4, 00:16:04.381 "base_bdevs_list": [ 00:16:04.381 { 00:16:04.381 "name": null, 00:16:04.381 "uuid": "d61c2b60-3fe5-4b50-a021-734ae903eecd", 00:16:04.381 "is_configured": false, 00:16:04.381 "data_offset": 0, 00:16:04.381 "data_size": 65536 00:16:04.381 }, 00:16:04.381 { 00:16:04.381 "name": null, 00:16:04.381 "uuid": "2d819485-c5d8-42f1-bfd1-6f0d88dcd000", 00:16:04.381 "is_configured": false, 00:16:04.381 "data_offset": 0, 00:16:04.381 "data_size": 65536 00:16:04.381 }, 00:16:04.381 { 00:16:04.381 "name": "BaseBdev3", 00:16:04.381 "uuid": "18ee83b6-7427-4d2e-a262-dca2529469ae", 00:16:04.381 "is_configured": true, 00:16:04.381 "data_offset": 0, 00:16:04.381 "data_size": 65536 00:16:04.381 }, 00:16:04.381 { 00:16:04.381 "name": "BaseBdev4", 00:16:04.381 "uuid": "c371db57-1c44-4360-976e-7c896f20e82a", 00:16:04.381 "is_configured": true, 00:16:04.381 "data_offset": 0, 00:16:04.381 "data_size": 65536 00:16:04.381 } 00:16:04.381 ] 00:16:04.381 }' 00:16:04.381 08:52:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.381 08:52:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.951 08:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.951 08:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:04.951 08:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.951 08:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.951 08:52:41 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.951 08:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:04.951 08:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:04.951 08:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.951 08:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.951 [2024-10-05 08:52:41.232127] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:04.951 08:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.951 08:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:04.951 08:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:04.951 08:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:04.951 08:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:04.951 08:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.951 08:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:04.951 08:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.951 08:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.951 08:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.951 08:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.951 08:52:41 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.951 08:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.951 08:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.951 08:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.951 08:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.951 08:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.951 "name": "Existed_Raid", 00:16:04.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.951 "strip_size_kb": 64, 00:16:04.951 "state": "configuring", 00:16:04.951 "raid_level": "raid5f", 00:16:04.951 "superblock": false, 00:16:04.951 "num_base_bdevs": 4, 00:16:04.951 "num_base_bdevs_discovered": 3, 00:16:04.951 "num_base_bdevs_operational": 4, 00:16:04.951 "base_bdevs_list": [ 00:16:04.951 { 00:16:04.951 "name": null, 00:16:04.951 "uuid": "d61c2b60-3fe5-4b50-a021-734ae903eecd", 00:16:04.951 "is_configured": false, 00:16:04.951 "data_offset": 0, 00:16:04.951 "data_size": 65536 00:16:04.951 }, 00:16:04.951 { 00:16:04.951 "name": "BaseBdev2", 00:16:04.951 "uuid": "2d819485-c5d8-42f1-bfd1-6f0d88dcd000", 00:16:04.951 "is_configured": true, 00:16:04.951 "data_offset": 0, 00:16:04.951 "data_size": 65536 00:16:04.951 }, 00:16:04.951 { 00:16:04.951 "name": "BaseBdev3", 00:16:04.951 "uuid": "18ee83b6-7427-4d2e-a262-dca2529469ae", 00:16:04.951 "is_configured": true, 00:16:04.951 "data_offset": 0, 00:16:04.951 "data_size": 65536 00:16:04.951 }, 00:16:04.951 { 00:16:04.951 "name": "BaseBdev4", 00:16:04.951 "uuid": "c371db57-1c44-4360-976e-7c896f20e82a", 00:16:04.951 "is_configured": true, 00:16:04.951 "data_offset": 0, 00:16:04.951 "data_size": 65536 00:16:04.951 } 00:16:04.951 ] 00:16:04.951 }' 00:16:04.951 08:52:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.951 08:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.212 08:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.212 08:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:05.212 08:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.212 08:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.472 08:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.472 08:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:05.472 08:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.472 08:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:05.472 08:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.472 08:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.472 08:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.472 08:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d61c2b60-3fe5-4b50-a021-734ae903eecd 00:16:05.472 08:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.472 08:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.472 [2024-10-05 08:52:41.806319] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:05.473 [2024-10-05 
08:52:41.806437] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:05.473 [2024-10-05 08:52:41.806462] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:05.473 [2024-10-05 08:52:41.806712] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:05.473 [2024-10-05 08:52:41.812990] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:05.473 [2024-10-05 08:52:41.813055] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:05.473 [2024-10-05 08:52:41.813311] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.473 NewBaseBdev 00:16:05.473 08:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.473 08:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:05.473 08:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:16:05.473 08:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:05.473 08:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:05.473 08:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:05.473 08:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:05.473 08:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:05.473 08:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.473 08:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.473 08:52:41 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.473 08:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:05.473 08:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.473 08:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.473 [ 00:16:05.473 { 00:16:05.473 "name": "NewBaseBdev", 00:16:05.473 "aliases": [ 00:16:05.473 "d61c2b60-3fe5-4b50-a021-734ae903eecd" 00:16:05.473 ], 00:16:05.473 "product_name": "Malloc disk", 00:16:05.473 "block_size": 512, 00:16:05.473 "num_blocks": 65536, 00:16:05.473 "uuid": "d61c2b60-3fe5-4b50-a021-734ae903eecd", 00:16:05.473 "assigned_rate_limits": { 00:16:05.473 "rw_ios_per_sec": 0, 00:16:05.473 "rw_mbytes_per_sec": 0, 00:16:05.473 "r_mbytes_per_sec": 0, 00:16:05.473 "w_mbytes_per_sec": 0 00:16:05.473 }, 00:16:05.473 "claimed": true, 00:16:05.473 "claim_type": "exclusive_write", 00:16:05.473 "zoned": false, 00:16:05.473 "supported_io_types": { 00:16:05.473 "read": true, 00:16:05.473 "write": true, 00:16:05.473 "unmap": true, 00:16:05.473 "flush": true, 00:16:05.473 "reset": true, 00:16:05.473 "nvme_admin": false, 00:16:05.473 "nvme_io": false, 00:16:05.473 "nvme_io_md": false, 00:16:05.473 "write_zeroes": true, 00:16:05.473 "zcopy": true, 00:16:05.473 "get_zone_info": false, 00:16:05.473 "zone_management": false, 00:16:05.473 "zone_append": false, 00:16:05.473 "compare": false, 00:16:05.473 "compare_and_write": false, 00:16:05.473 "abort": true, 00:16:05.473 "seek_hole": false, 00:16:05.473 "seek_data": false, 00:16:05.473 "copy": true, 00:16:05.473 "nvme_iov_md": false 00:16:05.473 }, 00:16:05.473 "memory_domains": [ 00:16:05.473 { 00:16:05.473 "dma_device_id": "system", 00:16:05.473 "dma_device_type": 1 00:16:05.473 }, 00:16:05.473 { 00:16:05.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.473 "dma_device_type": 2 00:16:05.473 } 
00:16:05.473 ], 00:16:05.473 "driver_specific": {} 00:16:05.473 } 00:16:05.473 ] 00:16:05.473 08:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.473 08:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:05.473 08:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:05.473 08:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:05.473 08:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.473 08:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:05.473 08:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.473 08:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:05.473 08:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.473 08:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.473 08:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.473 08:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.473 08:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.473 08:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.473 08:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.473 08:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.473 08:52:41 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.473 08:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.473 "name": "Existed_Raid", 00:16:05.473 "uuid": "ed30624c-f3c7-42bc-8580-5ced89d88754", 00:16:05.473 "strip_size_kb": 64, 00:16:05.473 "state": "online", 00:16:05.473 "raid_level": "raid5f", 00:16:05.473 "superblock": false, 00:16:05.473 "num_base_bdevs": 4, 00:16:05.473 "num_base_bdevs_discovered": 4, 00:16:05.473 "num_base_bdevs_operational": 4, 00:16:05.473 "base_bdevs_list": [ 00:16:05.473 { 00:16:05.473 "name": "NewBaseBdev", 00:16:05.473 "uuid": "d61c2b60-3fe5-4b50-a021-734ae903eecd", 00:16:05.473 "is_configured": true, 00:16:05.473 "data_offset": 0, 00:16:05.473 "data_size": 65536 00:16:05.473 }, 00:16:05.473 { 00:16:05.473 "name": "BaseBdev2", 00:16:05.473 "uuid": "2d819485-c5d8-42f1-bfd1-6f0d88dcd000", 00:16:05.473 "is_configured": true, 00:16:05.473 "data_offset": 0, 00:16:05.473 "data_size": 65536 00:16:05.473 }, 00:16:05.473 { 00:16:05.473 "name": "BaseBdev3", 00:16:05.473 "uuid": "18ee83b6-7427-4d2e-a262-dca2529469ae", 00:16:05.473 "is_configured": true, 00:16:05.473 "data_offset": 0, 00:16:05.473 "data_size": 65536 00:16:05.473 }, 00:16:05.473 { 00:16:05.473 "name": "BaseBdev4", 00:16:05.473 "uuid": "c371db57-1c44-4360-976e-7c896f20e82a", 00:16:05.473 "is_configured": true, 00:16:05.473 "data_offset": 0, 00:16:05.473 "data_size": 65536 00:16:05.473 } 00:16:05.473 ] 00:16:05.473 }' 00:16:05.473 08:52:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.473 08:52:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.043 08:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:06.043 08:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:06.043 08:52:42 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:06.043 08:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:06.043 08:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:06.043 08:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:06.043 08:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:06.043 08:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:06.043 08:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.043 08:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.043 [2024-10-05 08:52:42.324261] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:06.043 08:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.043 08:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:06.043 "name": "Existed_Raid", 00:16:06.043 "aliases": [ 00:16:06.043 "ed30624c-f3c7-42bc-8580-5ced89d88754" 00:16:06.043 ], 00:16:06.043 "product_name": "Raid Volume", 00:16:06.043 "block_size": 512, 00:16:06.043 "num_blocks": 196608, 00:16:06.043 "uuid": "ed30624c-f3c7-42bc-8580-5ced89d88754", 00:16:06.043 "assigned_rate_limits": { 00:16:06.043 "rw_ios_per_sec": 0, 00:16:06.043 "rw_mbytes_per_sec": 0, 00:16:06.043 "r_mbytes_per_sec": 0, 00:16:06.043 "w_mbytes_per_sec": 0 00:16:06.043 }, 00:16:06.043 "claimed": false, 00:16:06.043 "zoned": false, 00:16:06.043 "supported_io_types": { 00:16:06.043 "read": true, 00:16:06.043 "write": true, 00:16:06.043 "unmap": false, 00:16:06.043 "flush": false, 00:16:06.043 "reset": true, 00:16:06.043 "nvme_admin": false, 00:16:06.043 "nvme_io": false, 00:16:06.043 "nvme_io_md": 
false, 00:16:06.043 "write_zeroes": true, 00:16:06.043 "zcopy": false, 00:16:06.043 "get_zone_info": false, 00:16:06.043 "zone_management": false, 00:16:06.043 "zone_append": false, 00:16:06.043 "compare": false, 00:16:06.043 "compare_and_write": false, 00:16:06.043 "abort": false, 00:16:06.043 "seek_hole": false, 00:16:06.043 "seek_data": false, 00:16:06.043 "copy": false, 00:16:06.043 "nvme_iov_md": false 00:16:06.043 }, 00:16:06.043 "driver_specific": { 00:16:06.043 "raid": { 00:16:06.043 "uuid": "ed30624c-f3c7-42bc-8580-5ced89d88754", 00:16:06.043 "strip_size_kb": 64, 00:16:06.043 "state": "online", 00:16:06.043 "raid_level": "raid5f", 00:16:06.043 "superblock": false, 00:16:06.043 "num_base_bdevs": 4, 00:16:06.043 "num_base_bdevs_discovered": 4, 00:16:06.043 "num_base_bdevs_operational": 4, 00:16:06.043 "base_bdevs_list": [ 00:16:06.043 { 00:16:06.043 "name": "NewBaseBdev", 00:16:06.043 "uuid": "d61c2b60-3fe5-4b50-a021-734ae903eecd", 00:16:06.043 "is_configured": true, 00:16:06.043 "data_offset": 0, 00:16:06.043 "data_size": 65536 00:16:06.043 }, 00:16:06.043 { 00:16:06.043 "name": "BaseBdev2", 00:16:06.043 "uuid": "2d819485-c5d8-42f1-bfd1-6f0d88dcd000", 00:16:06.043 "is_configured": true, 00:16:06.043 "data_offset": 0, 00:16:06.043 "data_size": 65536 00:16:06.043 }, 00:16:06.043 { 00:16:06.043 "name": "BaseBdev3", 00:16:06.043 "uuid": "18ee83b6-7427-4d2e-a262-dca2529469ae", 00:16:06.043 "is_configured": true, 00:16:06.043 "data_offset": 0, 00:16:06.043 "data_size": 65536 00:16:06.043 }, 00:16:06.043 { 00:16:06.043 "name": "BaseBdev4", 00:16:06.043 "uuid": "c371db57-1c44-4360-976e-7c896f20e82a", 00:16:06.043 "is_configured": true, 00:16:06.043 "data_offset": 0, 00:16:06.043 "data_size": 65536 00:16:06.043 } 00:16:06.044 ] 00:16:06.044 } 00:16:06.044 } 00:16:06.044 }' 00:16:06.044 08:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:06.044 08:52:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:06.044 BaseBdev2 00:16:06.044 BaseBdev3 00:16:06.044 BaseBdev4' 00:16:06.044 08:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:06.044 08:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:06.044 08:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:06.044 08:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:06.044 08:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:06.044 08:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.044 08:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.044 08:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.044 08:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:06.044 08:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:06.044 08:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:06.044 08:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:06.044 08:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:06.044 08:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.044 08:52:42 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:06.044 08:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.044 08:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:06.044 08:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:06.044 08:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:06.044 08:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:06.044 08:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:06.044 08:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.044 08:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.304 08:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.304 08:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:06.304 08:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:06.304 08:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:06.304 08:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:06.304 08:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:06.304 08:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.304 08:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.304 08:52:42 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.304 08:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:06.304 08:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:06.304 08:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:06.304 08:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.304 08:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.304 [2024-10-05 08:52:42.583666] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:06.304 [2024-10-05 08:52:42.583694] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:06.304 [2024-10-05 08:52:42.583756] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:06.304 [2024-10-05 08:52:42.584040] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:06.304 [2024-10-05 08:52:42.584051] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:06.304 08:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.304 08:52:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 79634 00:16:06.304 08:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 79634 ']' 00:16:06.304 08:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 79634 00:16:06.304 08:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:16:06.304 08:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:16:06.304 08:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79634 00:16:06.304 08:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:06.304 08:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:06.304 killing process with pid 79634 00:16:06.304 08:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79634' 00:16:06.304 08:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 79634 00:16:06.304 [2024-10-05 08:52:42.624238] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:06.304 08:52:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 79634 00:16:06.565 [2024-10-05 08:52:42.993123] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:07.948 08:52:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:07.948 00:16:07.948 real 0m11.740s 00:16:07.948 user 0m18.542s 00:16:07.948 sys 0m2.290s 00:16:07.948 08:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:07.948 08:52:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.948 ************************************ 00:16:07.948 END TEST raid5f_state_function_test 00:16:07.948 ************************************ 00:16:07.948 08:52:44 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:16:07.948 08:52:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:07.948 08:52:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:07.948 08:52:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:07.948 ************************************ 00:16:07.948 START TEST 
raid5f_state_function_test_sb 00:16:07.948 ************************************ 00:16:07.948 08:52:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 true 00:16:07.948 08:52:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:07.948 08:52:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:07.948 08:52:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:07.948 08:52:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:07.948 08:52:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:07.948 08:52:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:07.948 08:52:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:07.948 08:52:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:07.948 08:52:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:07.948 08:52:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:07.948 08:52:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:07.948 08:52:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:07.948 08:52:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:07.948 08:52:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:07.948 08:52:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:07.948 08:52:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:07.948 
08:52:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:07.948 08:52:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:07.948 08:52:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:07.948 08:52:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:07.948 08:52:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:07.948 08:52:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:07.948 08:52:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:07.948 08:52:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:07.948 08:52:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:07.948 08:52:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:07.948 08:52:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:07.948 08:52:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:07.948 08:52:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:07.948 08:52:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80238 00:16:07.948 08:52:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:07.948 Process raid pid: 80238 00:16:07.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:07.948 08:52:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80238' 00:16:07.948 08:52:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80238 00:16:07.948 08:52:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 80238 ']' 00:16:07.948 08:52:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.948 08:52:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:07.949 08:52:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.949 08:52:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:07.949 08:52:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.949 [2024-10-05 08:52:44.381490] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:16:07.949 [2024-10-05 08:52:44.381613] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:08.217 [2024-10-05 08:52:44.547890] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.508 [2024-10-05 08:52:44.752329] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.508 [2024-10-05 08:52:44.952160] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:08.508 [2024-10-05 08:52:44.952278] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:08.770 08:52:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:08.770 08:52:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:16:08.770 08:52:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:08.770 08:52:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.770 08:52:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.770 [2024-10-05 08:52:45.196674] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:08.770 [2024-10-05 08:52:45.196732] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:08.770 [2024-10-05 08:52:45.196742] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:08.770 [2024-10-05 08:52:45.196750] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:08.770 [2024-10-05 08:52:45.196756] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:16:08.770 [2024-10-05 08:52:45.196765] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:08.770 [2024-10-05 08:52:45.196771] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:08.770 [2024-10-05 08:52:45.196779] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:08.770 08:52:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.770 08:52:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:08.770 08:52:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:08.770 08:52:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:08.770 08:52:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:08.770 08:52:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.770 08:52:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:08.770 08:52:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.770 08:52:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.770 08:52:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.770 08:52:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.770 08:52:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.770 08:52:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:16:08.770 08:52:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.770 08:52:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.770 08:52:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.030 08:52:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.030 "name": "Existed_Raid", 00:16:09.030 "uuid": "9f2a5c5d-e69a-4fbe-80a5-377f408ea1c7", 00:16:09.030 "strip_size_kb": 64, 00:16:09.030 "state": "configuring", 00:16:09.030 "raid_level": "raid5f", 00:16:09.030 "superblock": true, 00:16:09.030 "num_base_bdevs": 4, 00:16:09.030 "num_base_bdevs_discovered": 0, 00:16:09.030 "num_base_bdevs_operational": 4, 00:16:09.030 "base_bdevs_list": [ 00:16:09.030 { 00:16:09.030 "name": "BaseBdev1", 00:16:09.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.030 "is_configured": false, 00:16:09.030 "data_offset": 0, 00:16:09.030 "data_size": 0 00:16:09.030 }, 00:16:09.030 { 00:16:09.030 "name": "BaseBdev2", 00:16:09.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.030 "is_configured": false, 00:16:09.031 "data_offset": 0, 00:16:09.031 "data_size": 0 00:16:09.031 }, 00:16:09.031 { 00:16:09.031 "name": "BaseBdev3", 00:16:09.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.031 "is_configured": false, 00:16:09.031 "data_offset": 0, 00:16:09.031 "data_size": 0 00:16:09.031 }, 00:16:09.031 { 00:16:09.031 "name": "BaseBdev4", 00:16:09.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.031 "is_configured": false, 00:16:09.031 "data_offset": 0, 00:16:09.031 "data_size": 0 00:16:09.031 } 00:16:09.031 ] 00:16:09.031 }' 00:16:09.031 08:52:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.031 08:52:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:09.291 08:52:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:09.291 08:52:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.291 08:52:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.291 [2024-10-05 08:52:45.683714] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:09.291 [2024-10-05 08:52:45.683814] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:09.291 08:52:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.291 08:52:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:09.291 08:52:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.291 08:52:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.291 [2024-10-05 08:52:45.691738] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:09.291 [2024-10-05 08:52:45.691818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:09.291 [2024-10-05 08:52:45.691843] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:09.291 [2024-10-05 08:52:45.691863] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:09.291 [2024-10-05 08:52:45.691880] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:09.291 [2024-10-05 08:52:45.691898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:09.291 [2024-10-05 08:52:45.691915] 
bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:09.291 [2024-10-05 08:52:45.691934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:09.291 08:52:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.291 08:52:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:09.291 08:52:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.291 08:52:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.291 [2024-10-05 08:52:45.745306] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:09.291 BaseBdev1 00:16:09.291 08:52:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.291 08:52:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:09.291 08:52:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:09.291 08:52:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:09.291 08:52:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:09.291 08:52:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:09.291 08:52:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:09.291 08:52:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:09.291 08:52:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.291 08:52:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:09.291 08:52:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.291 08:52:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:09.291 08:52:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.291 08:52:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.551 [ 00:16:09.551 { 00:16:09.551 "name": "BaseBdev1", 00:16:09.551 "aliases": [ 00:16:09.551 "e99d05aa-9b11-437c-ba98-364cef48182a" 00:16:09.551 ], 00:16:09.551 "product_name": "Malloc disk", 00:16:09.551 "block_size": 512, 00:16:09.551 "num_blocks": 65536, 00:16:09.551 "uuid": "e99d05aa-9b11-437c-ba98-364cef48182a", 00:16:09.551 "assigned_rate_limits": { 00:16:09.551 "rw_ios_per_sec": 0, 00:16:09.551 "rw_mbytes_per_sec": 0, 00:16:09.551 "r_mbytes_per_sec": 0, 00:16:09.551 "w_mbytes_per_sec": 0 00:16:09.551 }, 00:16:09.551 "claimed": true, 00:16:09.551 "claim_type": "exclusive_write", 00:16:09.551 "zoned": false, 00:16:09.551 "supported_io_types": { 00:16:09.551 "read": true, 00:16:09.551 "write": true, 00:16:09.551 "unmap": true, 00:16:09.551 "flush": true, 00:16:09.551 "reset": true, 00:16:09.551 "nvme_admin": false, 00:16:09.551 "nvme_io": false, 00:16:09.551 "nvme_io_md": false, 00:16:09.551 "write_zeroes": true, 00:16:09.551 "zcopy": true, 00:16:09.551 "get_zone_info": false, 00:16:09.551 "zone_management": false, 00:16:09.551 "zone_append": false, 00:16:09.551 "compare": false, 00:16:09.551 "compare_and_write": false, 00:16:09.551 "abort": true, 00:16:09.551 "seek_hole": false, 00:16:09.551 "seek_data": false, 00:16:09.551 "copy": true, 00:16:09.551 "nvme_iov_md": false 00:16:09.551 }, 00:16:09.551 "memory_domains": [ 00:16:09.551 { 00:16:09.551 "dma_device_id": "system", 00:16:09.551 "dma_device_type": 1 00:16:09.551 }, 00:16:09.551 { 00:16:09.551 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:09.551 "dma_device_type": 2 00:16:09.551 } 00:16:09.551 ], 00:16:09.551 "driver_specific": {} 00:16:09.551 } 00:16:09.551 ] 00:16:09.551 08:52:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.551 08:52:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:09.551 08:52:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:09.551 08:52:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:09.551 08:52:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:09.551 08:52:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:09.551 08:52:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.551 08:52:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:09.551 08:52:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.551 08:52:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.551 08:52:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.551 08:52:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.551 08:52:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.551 08:52:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.551 08:52:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.551 08:52:45 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.551 08:52:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.551 08:52:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.551 "name": "Existed_Raid", 00:16:09.551 "uuid": "ff2866bf-ff49-4bc3-ad15-2306786c0fdc", 00:16:09.551 "strip_size_kb": 64, 00:16:09.551 "state": "configuring", 00:16:09.551 "raid_level": "raid5f", 00:16:09.551 "superblock": true, 00:16:09.551 "num_base_bdevs": 4, 00:16:09.551 "num_base_bdevs_discovered": 1, 00:16:09.551 "num_base_bdevs_operational": 4, 00:16:09.551 "base_bdevs_list": [ 00:16:09.551 { 00:16:09.551 "name": "BaseBdev1", 00:16:09.551 "uuid": "e99d05aa-9b11-437c-ba98-364cef48182a", 00:16:09.552 "is_configured": true, 00:16:09.552 "data_offset": 2048, 00:16:09.552 "data_size": 63488 00:16:09.552 }, 00:16:09.552 { 00:16:09.552 "name": "BaseBdev2", 00:16:09.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.552 "is_configured": false, 00:16:09.552 "data_offset": 0, 00:16:09.552 "data_size": 0 00:16:09.552 }, 00:16:09.552 { 00:16:09.552 "name": "BaseBdev3", 00:16:09.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.552 "is_configured": false, 00:16:09.552 "data_offset": 0, 00:16:09.552 "data_size": 0 00:16:09.552 }, 00:16:09.552 { 00:16:09.552 "name": "BaseBdev4", 00:16:09.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.552 "is_configured": false, 00:16:09.552 "data_offset": 0, 00:16:09.552 "data_size": 0 00:16:09.552 } 00:16:09.552 ] 00:16:09.552 }' 00:16:09.552 08:52:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.552 08:52:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.811 08:52:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:09.811 08:52:46 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.811 08:52:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.811 [2024-10-05 08:52:46.240580] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:09.811 [2024-10-05 08:52:46.240620] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:09.811 08:52:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.811 08:52:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:09.811 08:52:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.811 08:52:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.811 [2024-10-05 08:52:46.252604] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:09.811 [2024-10-05 08:52:46.254339] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:09.811 [2024-10-05 08:52:46.254428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:09.811 [2024-10-05 08:52:46.254442] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:09.811 [2024-10-05 08:52:46.254452] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:09.811 [2024-10-05 08:52:46.254460] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:09.811 [2024-10-05 08:52:46.254468] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:09.811 08:52:46 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.811 08:52:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:09.811 08:52:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:09.811 08:52:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:09.812 08:52:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:09.812 08:52:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:09.812 08:52:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:09.812 08:52:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.812 08:52:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:09.812 08:52:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.812 08:52:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.812 08:52:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.812 08:52:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.812 08:52:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.812 08:52:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.812 08:52:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.812 08:52:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:10.072 08:52:46 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.072 08:52:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.072 "name": "Existed_Raid", 00:16:10.072 "uuid": "1890c4ba-2977-46d8-8a7f-636dc34ed956", 00:16:10.072 "strip_size_kb": 64, 00:16:10.072 "state": "configuring", 00:16:10.072 "raid_level": "raid5f", 00:16:10.072 "superblock": true, 00:16:10.072 "num_base_bdevs": 4, 00:16:10.072 "num_base_bdevs_discovered": 1, 00:16:10.072 "num_base_bdevs_operational": 4, 00:16:10.072 "base_bdevs_list": [ 00:16:10.072 { 00:16:10.072 "name": "BaseBdev1", 00:16:10.072 "uuid": "e99d05aa-9b11-437c-ba98-364cef48182a", 00:16:10.072 "is_configured": true, 00:16:10.072 "data_offset": 2048, 00:16:10.072 "data_size": 63488 00:16:10.072 }, 00:16:10.072 { 00:16:10.072 "name": "BaseBdev2", 00:16:10.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.072 "is_configured": false, 00:16:10.072 "data_offset": 0, 00:16:10.072 "data_size": 0 00:16:10.072 }, 00:16:10.072 { 00:16:10.072 "name": "BaseBdev3", 00:16:10.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.072 "is_configured": false, 00:16:10.072 "data_offset": 0, 00:16:10.072 "data_size": 0 00:16:10.072 }, 00:16:10.072 { 00:16:10.072 "name": "BaseBdev4", 00:16:10.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.072 "is_configured": false, 00:16:10.072 "data_offset": 0, 00:16:10.072 "data_size": 0 00:16:10.072 } 00:16:10.072 ] 00:16:10.072 }' 00:16:10.072 08:52:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.072 08:52:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.333 08:52:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:10.333 08:52:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:10.333 08:52:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.333 [2024-10-05 08:52:46.765686] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:10.333 BaseBdev2 00:16:10.333 08:52:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.333 08:52:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:10.333 08:52:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:10.333 08:52:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:10.333 08:52:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:10.333 08:52:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:10.333 08:52:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:10.333 08:52:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:10.333 08:52:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.333 08:52:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.333 08:52:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.333 08:52:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:10.333 08:52:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.333 08:52:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.333 [ 00:16:10.333 { 00:16:10.333 "name": "BaseBdev2", 00:16:10.333 "aliases": [ 00:16:10.333 
"4505c590-ac8d-4a3b-8306-71b97df4cf4c" 00:16:10.333 ], 00:16:10.333 "product_name": "Malloc disk", 00:16:10.333 "block_size": 512, 00:16:10.333 "num_blocks": 65536, 00:16:10.333 "uuid": "4505c590-ac8d-4a3b-8306-71b97df4cf4c", 00:16:10.333 "assigned_rate_limits": { 00:16:10.333 "rw_ios_per_sec": 0, 00:16:10.333 "rw_mbytes_per_sec": 0, 00:16:10.333 "r_mbytes_per_sec": 0, 00:16:10.333 "w_mbytes_per_sec": 0 00:16:10.333 }, 00:16:10.333 "claimed": true, 00:16:10.333 "claim_type": "exclusive_write", 00:16:10.333 "zoned": false, 00:16:10.333 "supported_io_types": { 00:16:10.333 "read": true, 00:16:10.333 "write": true, 00:16:10.333 "unmap": true, 00:16:10.333 "flush": true, 00:16:10.333 "reset": true, 00:16:10.333 "nvme_admin": false, 00:16:10.333 "nvme_io": false, 00:16:10.333 "nvme_io_md": false, 00:16:10.333 "write_zeroes": true, 00:16:10.333 "zcopy": true, 00:16:10.333 "get_zone_info": false, 00:16:10.333 "zone_management": false, 00:16:10.333 "zone_append": false, 00:16:10.333 "compare": false, 00:16:10.333 "compare_and_write": false, 00:16:10.333 "abort": true, 00:16:10.333 "seek_hole": false, 00:16:10.333 "seek_data": false, 00:16:10.333 "copy": true, 00:16:10.333 "nvme_iov_md": false 00:16:10.333 }, 00:16:10.333 "memory_domains": [ 00:16:10.333 { 00:16:10.333 "dma_device_id": "system", 00:16:10.333 "dma_device_type": 1 00:16:10.333 }, 00:16:10.333 { 00:16:10.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.593 "dma_device_type": 2 00:16:10.593 } 00:16:10.593 ], 00:16:10.593 "driver_specific": {} 00:16:10.593 } 00:16:10.593 ] 00:16:10.593 08:52:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.593 08:52:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:10.593 08:52:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:10.593 08:52:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:16:10.593 08:52:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:10.593 08:52:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:10.593 08:52:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:10.593 08:52:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:10.593 08:52:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.593 08:52:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:10.593 08:52:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.593 08:52:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.593 08:52:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.593 08:52:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.593 08:52:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.593 08:52:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:10.593 08:52:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.593 08:52:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.593 08:52:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.593 08:52:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.593 "name": "Existed_Raid", 00:16:10.593 "uuid": 
"1890c4ba-2977-46d8-8a7f-636dc34ed956", 00:16:10.594 "strip_size_kb": 64, 00:16:10.594 "state": "configuring", 00:16:10.594 "raid_level": "raid5f", 00:16:10.594 "superblock": true, 00:16:10.594 "num_base_bdevs": 4, 00:16:10.594 "num_base_bdevs_discovered": 2, 00:16:10.594 "num_base_bdevs_operational": 4, 00:16:10.594 "base_bdevs_list": [ 00:16:10.594 { 00:16:10.594 "name": "BaseBdev1", 00:16:10.594 "uuid": "e99d05aa-9b11-437c-ba98-364cef48182a", 00:16:10.594 "is_configured": true, 00:16:10.594 "data_offset": 2048, 00:16:10.594 "data_size": 63488 00:16:10.594 }, 00:16:10.594 { 00:16:10.594 "name": "BaseBdev2", 00:16:10.594 "uuid": "4505c590-ac8d-4a3b-8306-71b97df4cf4c", 00:16:10.594 "is_configured": true, 00:16:10.594 "data_offset": 2048, 00:16:10.594 "data_size": 63488 00:16:10.594 }, 00:16:10.594 { 00:16:10.594 "name": "BaseBdev3", 00:16:10.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.594 "is_configured": false, 00:16:10.594 "data_offset": 0, 00:16:10.594 "data_size": 0 00:16:10.594 }, 00:16:10.594 { 00:16:10.594 "name": "BaseBdev4", 00:16:10.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.594 "is_configured": false, 00:16:10.594 "data_offset": 0, 00:16:10.594 "data_size": 0 00:16:10.594 } 00:16:10.594 ] 00:16:10.594 }' 00:16:10.594 08:52:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.594 08:52:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.854 08:52:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:10.854 08:52:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.854 08:52:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.854 [2024-10-05 08:52:47.277771] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:10.854 BaseBdev3 
00:16:10.854 08:52:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.854 08:52:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:10.855 08:52:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:10.855 08:52:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:10.855 08:52:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:10.855 08:52:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:10.855 08:52:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:10.855 08:52:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:10.855 08:52:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.855 08:52:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.855 08:52:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.855 08:52:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:10.855 08:52:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.855 08:52:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.855 [ 00:16:10.855 { 00:16:10.855 "name": "BaseBdev3", 00:16:10.855 "aliases": [ 00:16:10.855 "58ec5213-6fbc-406e-a698-bbfab266343f" 00:16:10.855 ], 00:16:10.855 "product_name": "Malloc disk", 00:16:10.855 "block_size": 512, 00:16:10.855 "num_blocks": 65536, 00:16:10.855 "uuid": "58ec5213-6fbc-406e-a698-bbfab266343f", 00:16:10.855 
"assigned_rate_limits": { 00:16:10.855 "rw_ios_per_sec": 0, 00:16:10.855 "rw_mbytes_per_sec": 0, 00:16:10.855 "r_mbytes_per_sec": 0, 00:16:10.855 "w_mbytes_per_sec": 0 00:16:10.855 }, 00:16:10.855 "claimed": true, 00:16:10.855 "claim_type": "exclusive_write", 00:16:10.855 "zoned": false, 00:16:10.855 "supported_io_types": { 00:16:10.855 "read": true, 00:16:10.855 "write": true, 00:16:10.855 "unmap": true, 00:16:10.855 "flush": true, 00:16:10.855 "reset": true, 00:16:10.855 "nvme_admin": false, 00:16:10.855 "nvme_io": false, 00:16:10.855 "nvme_io_md": false, 00:16:10.855 "write_zeroes": true, 00:16:10.855 "zcopy": true, 00:16:10.855 "get_zone_info": false, 00:16:10.855 "zone_management": false, 00:16:10.855 "zone_append": false, 00:16:10.855 "compare": false, 00:16:10.855 "compare_and_write": false, 00:16:10.855 "abort": true, 00:16:10.855 "seek_hole": false, 00:16:10.855 "seek_data": false, 00:16:10.855 "copy": true, 00:16:10.855 "nvme_iov_md": false 00:16:10.855 }, 00:16:10.855 "memory_domains": [ 00:16:10.855 { 00:16:10.855 "dma_device_id": "system", 00:16:10.855 "dma_device_type": 1 00:16:10.855 }, 00:16:10.855 { 00:16:10.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.855 "dma_device_type": 2 00:16:10.855 } 00:16:10.855 ], 00:16:10.855 "driver_specific": {} 00:16:10.855 } 00:16:10.855 ] 00:16:10.855 08:52:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.855 08:52:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:10.855 08:52:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:10.855 08:52:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:10.855 08:52:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:10.855 08:52:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:16:10.855 08:52:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:10.855 08:52:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:10.855 08:52:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.855 08:52:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:10.855 08:52:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.855 08:52:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.855 08:52:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.855 08:52:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.855 08:52:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.115 08:52:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.115 08:52:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.115 08:52:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.115 08:52:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.115 08:52:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.115 "name": "Existed_Raid", 00:16:11.115 "uuid": "1890c4ba-2977-46d8-8a7f-636dc34ed956", 00:16:11.115 "strip_size_kb": 64, 00:16:11.115 "state": "configuring", 00:16:11.115 "raid_level": "raid5f", 00:16:11.115 "superblock": true, 00:16:11.115 "num_base_bdevs": 4, 00:16:11.115 "num_base_bdevs_discovered": 3, 
00:16:11.115 "num_base_bdevs_operational": 4, 00:16:11.115 "base_bdevs_list": [ 00:16:11.115 { 00:16:11.115 "name": "BaseBdev1", 00:16:11.115 "uuid": "e99d05aa-9b11-437c-ba98-364cef48182a", 00:16:11.115 "is_configured": true, 00:16:11.115 "data_offset": 2048, 00:16:11.115 "data_size": 63488 00:16:11.115 }, 00:16:11.115 { 00:16:11.115 "name": "BaseBdev2", 00:16:11.115 "uuid": "4505c590-ac8d-4a3b-8306-71b97df4cf4c", 00:16:11.115 "is_configured": true, 00:16:11.115 "data_offset": 2048, 00:16:11.115 "data_size": 63488 00:16:11.115 }, 00:16:11.115 { 00:16:11.115 "name": "BaseBdev3", 00:16:11.115 "uuid": "58ec5213-6fbc-406e-a698-bbfab266343f", 00:16:11.115 "is_configured": true, 00:16:11.115 "data_offset": 2048, 00:16:11.115 "data_size": 63488 00:16:11.115 }, 00:16:11.115 { 00:16:11.115 "name": "BaseBdev4", 00:16:11.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.115 "is_configured": false, 00:16:11.115 "data_offset": 0, 00:16:11.115 "data_size": 0 00:16:11.115 } 00:16:11.115 ] 00:16:11.115 }' 00:16:11.115 08:52:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.115 08:52:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.375 08:52:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:11.375 08:52:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.375 08:52:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.375 [2024-10-05 08:52:47.791308] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:11.375 [2024-10-05 08:52:47.791554] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:11.375 [2024-10-05 08:52:47.791571] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:11.375 [2024-10-05 
08:52:47.791827] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:11.375 BaseBdev4 00:16:11.375 08:52:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.375 08:52:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:11.375 08:52:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:16:11.375 08:52:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:11.375 08:52:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:11.375 08:52:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:11.375 08:52:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:11.375 08:52:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:11.375 08:52:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.375 08:52:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.376 [2024-10-05 08:52:47.799043] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:11.376 [2024-10-05 08:52:47.799067] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:11.376 [2024-10-05 08:52:47.799228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.376 08:52:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.376 08:52:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:11.376 08:52:47 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.376 08:52:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.376 [ 00:16:11.376 { 00:16:11.376 "name": "BaseBdev4", 00:16:11.376 "aliases": [ 00:16:11.376 "1f9a60f6-0e37-434c-9c3a-cb367c2c419e" 00:16:11.376 ], 00:16:11.376 "product_name": "Malloc disk", 00:16:11.376 "block_size": 512, 00:16:11.376 "num_blocks": 65536, 00:16:11.376 "uuid": "1f9a60f6-0e37-434c-9c3a-cb367c2c419e", 00:16:11.376 "assigned_rate_limits": { 00:16:11.376 "rw_ios_per_sec": 0, 00:16:11.376 "rw_mbytes_per_sec": 0, 00:16:11.376 "r_mbytes_per_sec": 0, 00:16:11.376 "w_mbytes_per_sec": 0 00:16:11.376 }, 00:16:11.376 "claimed": true, 00:16:11.376 "claim_type": "exclusive_write", 00:16:11.376 "zoned": false, 00:16:11.376 "supported_io_types": { 00:16:11.376 "read": true, 00:16:11.376 "write": true, 00:16:11.376 "unmap": true, 00:16:11.376 "flush": true, 00:16:11.376 "reset": true, 00:16:11.376 "nvme_admin": false, 00:16:11.376 "nvme_io": false, 00:16:11.376 "nvme_io_md": false, 00:16:11.376 "write_zeroes": true, 00:16:11.376 "zcopy": true, 00:16:11.376 "get_zone_info": false, 00:16:11.376 "zone_management": false, 00:16:11.376 "zone_append": false, 00:16:11.376 "compare": false, 00:16:11.376 "compare_and_write": false, 00:16:11.376 "abort": true, 00:16:11.376 "seek_hole": false, 00:16:11.376 "seek_data": false, 00:16:11.376 "copy": true, 00:16:11.376 "nvme_iov_md": false 00:16:11.376 }, 00:16:11.376 "memory_domains": [ 00:16:11.376 { 00:16:11.376 "dma_device_id": "system", 00:16:11.376 "dma_device_type": 1 00:16:11.376 }, 00:16:11.376 { 00:16:11.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.376 "dma_device_type": 2 00:16:11.376 } 00:16:11.376 ], 00:16:11.376 "driver_specific": {} 00:16:11.376 } 00:16:11.376 ] 00:16:11.376 08:52:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.376 08:52:47 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:11.376 08:52:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:11.376 08:52:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:11.376 08:52:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:11.376 08:52:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:11.376 08:52:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.376 08:52:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:11.376 08:52:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:11.376 08:52:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:11.376 08:52:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.376 08:52:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.376 08:52:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.376 08:52:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.376 08:52:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.376 08:52:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.376 08:52:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.636 08:52:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:11.636 08:52:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.636 08:52:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.636 "name": "Existed_Raid", 00:16:11.636 "uuid": "1890c4ba-2977-46d8-8a7f-636dc34ed956", 00:16:11.636 "strip_size_kb": 64, 00:16:11.636 "state": "online", 00:16:11.636 "raid_level": "raid5f", 00:16:11.636 "superblock": true, 00:16:11.636 "num_base_bdevs": 4, 00:16:11.636 "num_base_bdevs_discovered": 4, 00:16:11.636 "num_base_bdevs_operational": 4, 00:16:11.636 "base_bdevs_list": [ 00:16:11.636 { 00:16:11.636 "name": "BaseBdev1", 00:16:11.636 "uuid": "e99d05aa-9b11-437c-ba98-364cef48182a", 00:16:11.636 "is_configured": true, 00:16:11.636 "data_offset": 2048, 00:16:11.636 "data_size": 63488 00:16:11.636 }, 00:16:11.636 { 00:16:11.636 "name": "BaseBdev2", 00:16:11.636 "uuid": "4505c590-ac8d-4a3b-8306-71b97df4cf4c", 00:16:11.636 "is_configured": true, 00:16:11.636 "data_offset": 2048, 00:16:11.636 "data_size": 63488 00:16:11.636 }, 00:16:11.636 { 00:16:11.636 "name": "BaseBdev3", 00:16:11.636 "uuid": "58ec5213-6fbc-406e-a698-bbfab266343f", 00:16:11.636 "is_configured": true, 00:16:11.636 "data_offset": 2048, 00:16:11.636 "data_size": 63488 00:16:11.636 }, 00:16:11.636 { 00:16:11.636 "name": "BaseBdev4", 00:16:11.636 "uuid": "1f9a60f6-0e37-434c-9c3a-cb367c2c419e", 00:16:11.636 "is_configured": true, 00:16:11.636 "data_offset": 2048, 00:16:11.636 "data_size": 63488 00:16:11.636 } 00:16:11.636 ] 00:16:11.636 }' 00:16:11.636 08:52:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.636 08:52:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.895 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:11.895 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:16:11.895 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:11.895 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:11.895 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:11.895 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:11.895 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:11.895 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:11.895 08:52:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.895 08:52:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.895 [2024-10-05 08:52:48.274270] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:11.895 08:52:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.895 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:11.895 "name": "Existed_Raid", 00:16:11.895 "aliases": [ 00:16:11.895 "1890c4ba-2977-46d8-8a7f-636dc34ed956" 00:16:11.895 ], 00:16:11.895 "product_name": "Raid Volume", 00:16:11.895 "block_size": 512, 00:16:11.895 "num_blocks": 190464, 00:16:11.895 "uuid": "1890c4ba-2977-46d8-8a7f-636dc34ed956", 00:16:11.895 "assigned_rate_limits": { 00:16:11.895 "rw_ios_per_sec": 0, 00:16:11.895 "rw_mbytes_per_sec": 0, 00:16:11.895 "r_mbytes_per_sec": 0, 00:16:11.895 "w_mbytes_per_sec": 0 00:16:11.895 }, 00:16:11.895 "claimed": false, 00:16:11.895 "zoned": false, 00:16:11.896 "supported_io_types": { 00:16:11.896 "read": true, 00:16:11.896 "write": true, 00:16:11.896 "unmap": false, 00:16:11.896 "flush": false, 
00:16:11.896 "reset": true, 00:16:11.896 "nvme_admin": false, 00:16:11.896 "nvme_io": false, 00:16:11.896 "nvme_io_md": false, 00:16:11.896 "write_zeroes": true, 00:16:11.896 "zcopy": false, 00:16:11.896 "get_zone_info": false, 00:16:11.896 "zone_management": false, 00:16:11.896 "zone_append": false, 00:16:11.896 "compare": false, 00:16:11.896 "compare_and_write": false, 00:16:11.896 "abort": false, 00:16:11.896 "seek_hole": false, 00:16:11.896 "seek_data": false, 00:16:11.896 "copy": false, 00:16:11.896 "nvme_iov_md": false 00:16:11.896 }, 00:16:11.896 "driver_specific": { 00:16:11.896 "raid": { 00:16:11.896 "uuid": "1890c4ba-2977-46d8-8a7f-636dc34ed956", 00:16:11.896 "strip_size_kb": 64, 00:16:11.896 "state": "online", 00:16:11.896 "raid_level": "raid5f", 00:16:11.896 "superblock": true, 00:16:11.896 "num_base_bdevs": 4, 00:16:11.896 "num_base_bdevs_discovered": 4, 00:16:11.896 "num_base_bdevs_operational": 4, 00:16:11.896 "base_bdevs_list": [ 00:16:11.896 { 00:16:11.896 "name": "BaseBdev1", 00:16:11.896 "uuid": "e99d05aa-9b11-437c-ba98-364cef48182a", 00:16:11.896 "is_configured": true, 00:16:11.896 "data_offset": 2048, 00:16:11.896 "data_size": 63488 00:16:11.896 }, 00:16:11.896 { 00:16:11.896 "name": "BaseBdev2", 00:16:11.896 "uuid": "4505c590-ac8d-4a3b-8306-71b97df4cf4c", 00:16:11.896 "is_configured": true, 00:16:11.896 "data_offset": 2048, 00:16:11.896 "data_size": 63488 00:16:11.896 }, 00:16:11.896 { 00:16:11.896 "name": "BaseBdev3", 00:16:11.896 "uuid": "58ec5213-6fbc-406e-a698-bbfab266343f", 00:16:11.896 "is_configured": true, 00:16:11.896 "data_offset": 2048, 00:16:11.896 "data_size": 63488 00:16:11.896 }, 00:16:11.896 { 00:16:11.896 "name": "BaseBdev4", 00:16:11.896 "uuid": "1f9a60f6-0e37-434c-9c3a-cb367c2c419e", 00:16:11.896 "is_configured": true, 00:16:11.896 "data_offset": 2048, 00:16:11.896 "data_size": 63488 00:16:11.896 } 00:16:11.896 ] 00:16:11.896 } 00:16:11.896 } 00:16:11.896 }' 00:16:11.896 08:52:48 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:11.896 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:11.896 BaseBdev2 00:16:11.896 BaseBdev3 00:16:11.896 BaseBdev4' 00:16:12.155 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:12.155 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:12.155 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:12.155 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:12.155 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:12.155 08:52:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.155 08:52:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.155 08:52:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.155 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:12.155 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:12.155 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:12.155 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:12.155 08:52:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.155 08:52:48 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:12.155 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:12.155 08:52:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.155 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:12.155 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:12.155 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:12.155 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:12.155 08:52:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.155 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:12.155 08:52:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.155 08:52:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.155 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:12.155 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:12.155 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:12.155 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:12.155 08:52:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.155 08:52:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:16:12.155 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:12.155 08:52:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.155 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:12.155 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:12.155 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:12.155 08:52:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.155 08:52:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.155 [2024-10-05 08:52:48.585692] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:12.414 08:52:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.414 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:12.414 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:12.414 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:12.414 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:12.414 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:12.414 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:12.414 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:12.414 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:16:12.414 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:12.414 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:12.414 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:12.414 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.414 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.414 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.414 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.414 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.414 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.414 08:52:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.414 08:52:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.414 08:52:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.414 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.414 "name": "Existed_Raid", 00:16:12.414 "uuid": "1890c4ba-2977-46d8-8a7f-636dc34ed956", 00:16:12.414 "strip_size_kb": 64, 00:16:12.414 "state": "online", 00:16:12.414 "raid_level": "raid5f", 00:16:12.414 "superblock": true, 00:16:12.414 "num_base_bdevs": 4, 00:16:12.414 "num_base_bdevs_discovered": 3, 00:16:12.414 "num_base_bdevs_operational": 3, 00:16:12.414 "base_bdevs_list": [ 00:16:12.414 { 00:16:12.414 "name": null, 00:16:12.414 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:12.414 "is_configured": false, 00:16:12.414 "data_offset": 0, 00:16:12.414 "data_size": 63488 00:16:12.414 }, 00:16:12.414 { 00:16:12.414 "name": "BaseBdev2", 00:16:12.414 "uuid": "4505c590-ac8d-4a3b-8306-71b97df4cf4c", 00:16:12.414 "is_configured": true, 00:16:12.414 "data_offset": 2048, 00:16:12.414 "data_size": 63488 00:16:12.414 }, 00:16:12.414 { 00:16:12.414 "name": "BaseBdev3", 00:16:12.414 "uuid": "58ec5213-6fbc-406e-a698-bbfab266343f", 00:16:12.414 "is_configured": true, 00:16:12.414 "data_offset": 2048, 00:16:12.414 "data_size": 63488 00:16:12.414 }, 00:16:12.414 { 00:16:12.414 "name": "BaseBdev4", 00:16:12.414 "uuid": "1f9a60f6-0e37-434c-9c3a-cb367c2c419e", 00:16:12.414 "is_configured": true, 00:16:12.414 "data_offset": 2048, 00:16:12.414 "data_size": 63488 00:16:12.414 } 00:16:12.414 ] 00:16:12.414 }' 00:16:12.414 08:52:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.414 08:52:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.673 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:12.673 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:12.673 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.673 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.673 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:12.673 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.673 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.673 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:16:12.673 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:12.673 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:12.674 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.674 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.674 [2024-10-05 08:52:49.125655] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:12.674 [2024-10-05 08:52:49.125825] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:12.932 [2024-10-05 08:52:49.213149] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:12.932 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.932 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:12.932 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:12.932 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.932 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.932 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.932 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:12.932 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.932 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:12.932 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:12.932 
08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:12.932 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.932 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.932 [2024-10-05 08:52:49.273077] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:12.932 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.932 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:12.932 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:12.932 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.932 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.932 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.932 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:12.932 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.193 [2024-10-05 08:52:49.421584] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:13.193 [2024-10-05 08:52:49.421638] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:13.193 BaseBdev2 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.193 [ 00:16:13.193 { 00:16:13.193 "name": "BaseBdev2", 00:16:13.193 "aliases": [ 00:16:13.193 "4c421fc1-c41b-41fc-b0a4-49ecc3333ec5" 00:16:13.193 ], 00:16:13.193 "product_name": "Malloc disk", 00:16:13.193 "block_size": 512, 00:16:13.193 "num_blocks": 65536, 00:16:13.193 "uuid": 
"4c421fc1-c41b-41fc-b0a4-49ecc3333ec5", 00:16:13.193 "assigned_rate_limits": { 00:16:13.193 "rw_ios_per_sec": 0, 00:16:13.193 "rw_mbytes_per_sec": 0, 00:16:13.193 "r_mbytes_per_sec": 0, 00:16:13.193 "w_mbytes_per_sec": 0 00:16:13.193 }, 00:16:13.193 "claimed": false, 00:16:13.193 "zoned": false, 00:16:13.193 "supported_io_types": { 00:16:13.193 "read": true, 00:16:13.193 "write": true, 00:16:13.193 "unmap": true, 00:16:13.193 "flush": true, 00:16:13.193 "reset": true, 00:16:13.193 "nvme_admin": false, 00:16:13.193 "nvme_io": false, 00:16:13.193 "nvme_io_md": false, 00:16:13.193 "write_zeroes": true, 00:16:13.193 "zcopy": true, 00:16:13.193 "get_zone_info": false, 00:16:13.193 "zone_management": false, 00:16:13.193 "zone_append": false, 00:16:13.193 "compare": false, 00:16:13.193 "compare_and_write": false, 00:16:13.193 "abort": true, 00:16:13.193 "seek_hole": false, 00:16:13.193 "seek_data": false, 00:16:13.193 "copy": true, 00:16:13.193 "nvme_iov_md": false 00:16:13.193 }, 00:16:13.193 "memory_domains": [ 00:16:13.193 { 00:16:13.193 "dma_device_id": "system", 00:16:13.193 "dma_device_type": 1 00:16:13.193 }, 00:16:13.193 { 00:16:13.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.193 "dma_device_type": 2 00:16:13.193 } 00:16:13.193 ], 00:16:13.193 "driver_specific": {} 00:16:13.193 } 00:16:13.193 ] 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.193 BaseBdev3 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.193 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.454 [ 00:16:13.454 { 00:16:13.454 "name": "BaseBdev3", 00:16:13.454 "aliases": [ 00:16:13.454 "578c81b0-c984-4aae-824a-83013a408ed3" 00:16:13.454 ], 00:16:13.454 
"product_name": "Malloc disk", 00:16:13.454 "block_size": 512, 00:16:13.454 "num_blocks": 65536, 00:16:13.454 "uuid": "578c81b0-c984-4aae-824a-83013a408ed3", 00:16:13.454 "assigned_rate_limits": { 00:16:13.454 "rw_ios_per_sec": 0, 00:16:13.454 "rw_mbytes_per_sec": 0, 00:16:13.454 "r_mbytes_per_sec": 0, 00:16:13.454 "w_mbytes_per_sec": 0 00:16:13.454 }, 00:16:13.454 "claimed": false, 00:16:13.454 "zoned": false, 00:16:13.454 "supported_io_types": { 00:16:13.454 "read": true, 00:16:13.454 "write": true, 00:16:13.454 "unmap": true, 00:16:13.454 "flush": true, 00:16:13.454 "reset": true, 00:16:13.454 "nvme_admin": false, 00:16:13.454 "nvme_io": false, 00:16:13.454 "nvme_io_md": false, 00:16:13.454 "write_zeroes": true, 00:16:13.454 "zcopy": true, 00:16:13.454 "get_zone_info": false, 00:16:13.454 "zone_management": false, 00:16:13.454 "zone_append": false, 00:16:13.454 "compare": false, 00:16:13.454 "compare_and_write": false, 00:16:13.454 "abort": true, 00:16:13.454 "seek_hole": false, 00:16:13.454 "seek_data": false, 00:16:13.454 "copy": true, 00:16:13.454 "nvme_iov_md": false 00:16:13.454 }, 00:16:13.454 "memory_domains": [ 00:16:13.454 { 00:16:13.454 "dma_device_id": "system", 00:16:13.454 "dma_device_type": 1 00:16:13.454 }, 00:16:13.454 { 00:16:13.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.454 "dma_device_type": 2 00:16:13.454 } 00:16:13.454 ], 00:16:13.454 "driver_specific": {} 00:16:13.454 } 00:16:13.454 ] 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.454 BaseBdev4 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.454 [ 00:16:13.454 { 00:16:13.454 "name": "BaseBdev4", 00:16:13.454 
"aliases": [ 00:16:13.454 "ac1b3b26-d28f-4270-a5f5-8f8c712e74a8" 00:16:13.454 ], 00:16:13.454 "product_name": "Malloc disk", 00:16:13.454 "block_size": 512, 00:16:13.454 "num_blocks": 65536, 00:16:13.454 "uuid": "ac1b3b26-d28f-4270-a5f5-8f8c712e74a8", 00:16:13.454 "assigned_rate_limits": { 00:16:13.454 "rw_ios_per_sec": 0, 00:16:13.454 "rw_mbytes_per_sec": 0, 00:16:13.454 "r_mbytes_per_sec": 0, 00:16:13.454 "w_mbytes_per_sec": 0 00:16:13.454 }, 00:16:13.454 "claimed": false, 00:16:13.454 "zoned": false, 00:16:13.454 "supported_io_types": { 00:16:13.454 "read": true, 00:16:13.454 "write": true, 00:16:13.454 "unmap": true, 00:16:13.454 "flush": true, 00:16:13.454 "reset": true, 00:16:13.454 "nvme_admin": false, 00:16:13.454 "nvme_io": false, 00:16:13.454 "nvme_io_md": false, 00:16:13.454 "write_zeroes": true, 00:16:13.454 "zcopy": true, 00:16:13.454 "get_zone_info": false, 00:16:13.454 "zone_management": false, 00:16:13.454 "zone_append": false, 00:16:13.454 "compare": false, 00:16:13.454 "compare_and_write": false, 00:16:13.454 "abort": true, 00:16:13.454 "seek_hole": false, 00:16:13.454 "seek_data": false, 00:16:13.454 "copy": true, 00:16:13.454 "nvme_iov_md": false 00:16:13.454 }, 00:16:13.454 "memory_domains": [ 00:16:13.454 { 00:16:13.454 "dma_device_id": "system", 00:16:13.454 "dma_device_type": 1 00:16:13.454 }, 00:16:13.454 { 00:16:13.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.454 "dma_device_type": 2 00:16:13.454 } 00:16:13.454 ], 00:16:13.454 "driver_specific": {} 00:16:13.454 } 00:16:13.454 ] 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:13.454 
08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.454 [2024-10-05 08:52:49.776098] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:13.454 [2024-10-05 08:52:49.776153] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:13.454 [2024-10-05 08:52:49.776173] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:13.454 [2024-10-05 08:52:49.777907] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:13.454 [2024-10-05 08:52:49.777969] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.454 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.454 "name": "Existed_Raid", 00:16:13.454 "uuid": "c8ef26cf-ac9e-4682-bcd6-2f1d055fe427", 00:16:13.454 "strip_size_kb": 64, 00:16:13.454 "state": "configuring", 00:16:13.454 "raid_level": "raid5f", 00:16:13.454 "superblock": true, 00:16:13.454 "num_base_bdevs": 4, 00:16:13.454 "num_base_bdevs_discovered": 3, 00:16:13.454 "num_base_bdevs_operational": 4, 00:16:13.454 "base_bdevs_list": [ 00:16:13.454 { 00:16:13.454 "name": "BaseBdev1", 00:16:13.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.454 "is_configured": false, 00:16:13.454 "data_offset": 0, 00:16:13.454 "data_size": 0 00:16:13.454 }, 00:16:13.454 { 00:16:13.454 "name": "BaseBdev2", 00:16:13.454 "uuid": "4c421fc1-c41b-41fc-b0a4-49ecc3333ec5", 00:16:13.455 "is_configured": true, 00:16:13.455 "data_offset": 2048, 00:16:13.455 "data_size": 63488 00:16:13.455 }, 00:16:13.455 { 00:16:13.455 "name": "BaseBdev3", 
00:16:13.455 "uuid": "578c81b0-c984-4aae-824a-83013a408ed3", 00:16:13.455 "is_configured": true, 00:16:13.455 "data_offset": 2048, 00:16:13.455 "data_size": 63488 00:16:13.455 }, 00:16:13.455 { 00:16:13.455 "name": "BaseBdev4", 00:16:13.455 "uuid": "ac1b3b26-d28f-4270-a5f5-8f8c712e74a8", 00:16:13.455 "is_configured": true, 00:16:13.455 "data_offset": 2048, 00:16:13.455 "data_size": 63488 00:16:13.455 } 00:16:13.455 ] 00:16:13.455 }' 00:16:13.455 08:52:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.455 08:52:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.024 08:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:14.024 08:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.024 08:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.024 [2024-10-05 08:52:50.267426] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:14.024 08:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.024 08:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:14.024 08:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:14.024 08:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:14.024 08:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:14.024 08:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:14.024 08:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:14.024 
08:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.024 08:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.024 08:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.024 08:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.024 08:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.024 08:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.024 08:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.024 08:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.024 08:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.024 08:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.024 "name": "Existed_Raid", 00:16:14.024 "uuid": "c8ef26cf-ac9e-4682-bcd6-2f1d055fe427", 00:16:14.024 "strip_size_kb": 64, 00:16:14.024 "state": "configuring", 00:16:14.024 "raid_level": "raid5f", 00:16:14.024 "superblock": true, 00:16:14.024 "num_base_bdevs": 4, 00:16:14.024 "num_base_bdevs_discovered": 2, 00:16:14.024 "num_base_bdevs_operational": 4, 00:16:14.024 "base_bdevs_list": [ 00:16:14.024 { 00:16:14.024 "name": "BaseBdev1", 00:16:14.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.024 "is_configured": false, 00:16:14.024 "data_offset": 0, 00:16:14.024 "data_size": 0 00:16:14.024 }, 00:16:14.024 { 00:16:14.024 "name": null, 00:16:14.024 "uuid": "4c421fc1-c41b-41fc-b0a4-49ecc3333ec5", 00:16:14.024 "is_configured": false, 00:16:14.024 "data_offset": 0, 00:16:14.024 "data_size": 63488 00:16:14.024 }, 00:16:14.024 { 
00:16:14.024 "name": "BaseBdev3", 00:16:14.024 "uuid": "578c81b0-c984-4aae-824a-83013a408ed3", 00:16:14.024 "is_configured": true, 00:16:14.024 "data_offset": 2048, 00:16:14.024 "data_size": 63488 00:16:14.024 }, 00:16:14.024 { 00:16:14.024 "name": "BaseBdev4", 00:16:14.024 "uuid": "ac1b3b26-d28f-4270-a5f5-8f8c712e74a8", 00:16:14.024 "is_configured": true, 00:16:14.024 "data_offset": 2048, 00:16:14.024 "data_size": 63488 00:16:14.024 } 00:16:14.024 ] 00:16:14.024 }' 00:16:14.024 08:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.024 08:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.284 08:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.284 08:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:14.284 08:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.284 08:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.284 08:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.284 08:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:14.284 08:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:14.284 08:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.284 08:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.544 [2024-10-05 08:52:50.789811] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:14.544 BaseBdev1 00:16:14.544 08:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:14.544 08:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:14.544 08:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:14.544 08:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:14.544 08:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:14.544 08:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:14.544 08:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:14.544 08:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:14.544 08:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.544 08:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.544 08:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.544 08:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:14.544 08:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.544 08:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.544 [ 00:16:14.544 { 00:16:14.544 "name": "BaseBdev1", 00:16:14.544 "aliases": [ 00:16:14.544 "0047d7be-4e70-428d-80b1-36eeaf401c1a" 00:16:14.544 ], 00:16:14.544 "product_name": "Malloc disk", 00:16:14.544 "block_size": 512, 00:16:14.544 "num_blocks": 65536, 00:16:14.544 "uuid": "0047d7be-4e70-428d-80b1-36eeaf401c1a", 00:16:14.544 "assigned_rate_limits": { 00:16:14.544 "rw_ios_per_sec": 0, 00:16:14.544 "rw_mbytes_per_sec": 0, 00:16:14.544 
"r_mbytes_per_sec": 0, 00:16:14.544 "w_mbytes_per_sec": 0 00:16:14.544 }, 00:16:14.544 "claimed": true, 00:16:14.544 "claim_type": "exclusive_write", 00:16:14.544 "zoned": false, 00:16:14.544 "supported_io_types": { 00:16:14.544 "read": true, 00:16:14.544 "write": true, 00:16:14.544 "unmap": true, 00:16:14.544 "flush": true, 00:16:14.544 "reset": true, 00:16:14.544 "nvme_admin": false, 00:16:14.544 "nvme_io": false, 00:16:14.544 "nvme_io_md": false, 00:16:14.544 "write_zeroes": true, 00:16:14.544 "zcopy": true, 00:16:14.544 "get_zone_info": false, 00:16:14.544 "zone_management": false, 00:16:14.544 "zone_append": false, 00:16:14.544 "compare": false, 00:16:14.544 "compare_and_write": false, 00:16:14.544 "abort": true, 00:16:14.544 "seek_hole": false, 00:16:14.544 "seek_data": false, 00:16:14.544 "copy": true, 00:16:14.544 "nvme_iov_md": false 00:16:14.544 }, 00:16:14.544 "memory_domains": [ 00:16:14.544 { 00:16:14.544 "dma_device_id": "system", 00:16:14.544 "dma_device_type": 1 00:16:14.544 }, 00:16:14.544 { 00:16:14.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.544 "dma_device_type": 2 00:16:14.544 } 00:16:14.544 ], 00:16:14.544 "driver_specific": {} 00:16:14.544 } 00:16:14.544 ] 00:16:14.544 08:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.544 08:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:14.544 08:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:14.544 08:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:14.544 08:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:14.544 08:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:14.544 08:52:50 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:14.544 08:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:14.544 08:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.544 08:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.544 08:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.544 08:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.544 08:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.544 08:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.544 08:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.544 08:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.544 08:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.544 08:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.544 "name": "Existed_Raid", 00:16:14.544 "uuid": "c8ef26cf-ac9e-4682-bcd6-2f1d055fe427", 00:16:14.544 "strip_size_kb": 64, 00:16:14.544 "state": "configuring", 00:16:14.544 "raid_level": "raid5f", 00:16:14.544 "superblock": true, 00:16:14.544 "num_base_bdevs": 4, 00:16:14.544 "num_base_bdevs_discovered": 3, 00:16:14.544 "num_base_bdevs_operational": 4, 00:16:14.544 "base_bdevs_list": [ 00:16:14.544 { 00:16:14.544 "name": "BaseBdev1", 00:16:14.544 "uuid": "0047d7be-4e70-428d-80b1-36eeaf401c1a", 00:16:14.544 "is_configured": true, 00:16:14.544 "data_offset": 2048, 00:16:14.544 "data_size": 63488 00:16:14.544 
}, 00:16:14.544 { 00:16:14.544 "name": null, 00:16:14.544 "uuid": "4c421fc1-c41b-41fc-b0a4-49ecc3333ec5", 00:16:14.544 "is_configured": false, 00:16:14.544 "data_offset": 0, 00:16:14.544 "data_size": 63488 00:16:14.544 }, 00:16:14.544 { 00:16:14.544 "name": "BaseBdev3", 00:16:14.544 "uuid": "578c81b0-c984-4aae-824a-83013a408ed3", 00:16:14.544 "is_configured": true, 00:16:14.544 "data_offset": 2048, 00:16:14.544 "data_size": 63488 00:16:14.544 }, 00:16:14.544 { 00:16:14.544 "name": "BaseBdev4", 00:16:14.544 "uuid": "ac1b3b26-d28f-4270-a5f5-8f8c712e74a8", 00:16:14.544 "is_configured": true, 00:16:14.544 "data_offset": 2048, 00:16:14.544 "data_size": 63488 00:16:14.544 } 00:16:14.544 ] 00:16:14.544 }' 00:16:14.544 08:52:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.544 08:52:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.114 08:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:15.114 08:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.114 08:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.114 08:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.114 08:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.114 08:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:15.114 08:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:15.114 08:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.114 08:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.114 
[2024-10-05 08:52:51.336948] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:15.114 08:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.114 08:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:15.114 08:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:15.114 08:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:15.114 08:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:15.114 08:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.114 08:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:15.114 08:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.114 08:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.114 08:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.114 08:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.114 08:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.114 08:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.114 08:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.114 08:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:15.114 08:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:16:15.114 08:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.114 "name": "Existed_Raid", 00:16:15.114 "uuid": "c8ef26cf-ac9e-4682-bcd6-2f1d055fe427", 00:16:15.114 "strip_size_kb": 64, 00:16:15.114 "state": "configuring", 00:16:15.114 "raid_level": "raid5f", 00:16:15.114 "superblock": true, 00:16:15.114 "num_base_bdevs": 4, 00:16:15.114 "num_base_bdevs_discovered": 2, 00:16:15.115 "num_base_bdevs_operational": 4, 00:16:15.115 "base_bdevs_list": [ 00:16:15.115 { 00:16:15.115 "name": "BaseBdev1", 00:16:15.115 "uuid": "0047d7be-4e70-428d-80b1-36eeaf401c1a", 00:16:15.115 "is_configured": true, 00:16:15.115 "data_offset": 2048, 00:16:15.115 "data_size": 63488 00:16:15.115 }, 00:16:15.115 { 00:16:15.115 "name": null, 00:16:15.115 "uuid": "4c421fc1-c41b-41fc-b0a4-49ecc3333ec5", 00:16:15.115 "is_configured": false, 00:16:15.115 "data_offset": 0, 00:16:15.115 "data_size": 63488 00:16:15.115 }, 00:16:15.115 { 00:16:15.115 "name": null, 00:16:15.115 "uuid": "578c81b0-c984-4aae-824a-83013a408ed3", 00:16:15.115 "is_configured": false, 00:16:15.115 "data_offset": 0, 00:16:15.115 "data_size": 63488 00:16:15.115 }, 00:16:15.115 { 00:16:15.115 "name": "BaseBdev4", 00:16:15.115 "uuid": "ac1b3b26-d28f-4270-a5f5-8f8c712e74a8", 00:16:15.115 "is_configured": true, 00:16:15.115 "data_offset": 2048, 00:16:15.115 "data_size": 63488 00:16:15.115 } 00:16:15.115 ] 00:16:15.115 }' 00:16:15.115 08:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.115 08:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.374 08:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:15.374 08:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.374 08:52:51 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.374 08:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.374 08:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.374 08:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:15.374 08:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:15.374 08:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.374 08:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.374 [2024-10-05 08:52:51.808131] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:15.374 08:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.374 08:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:15.374 08:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:15.374 08:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:15.374 08:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:15.374 08:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.374 08:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:15.374 08:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.374 08:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.374 08:52:51 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.374 08:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.374 08:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.374 08:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:15.374 08:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.374 08:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.374 08:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.633 08:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.633 "name": "Existed_Raid", 00:16:15.633 "uuid": "c8ef26cf-ac9e-4682-bcd6-2f1d055fe427", 00:16:15.634 "strip_size_kb": 64, 00:16:15.634 "state": "configuring", 00:16:15.634 "raid_level": "raid5f", 00:16:15.634 "superblock": true, 00:16:15.634 "num_base_bdevs": 4, 00:16:15.634 "num_base_bdevs_discovered": 3, 00:16:15.634 "num_base_bdevs_operational": 4, 00:16:15.634 "base_bdevs_list": [ 00:16:15.634 { 00:16:15.634 "name": "BaseBdev1", 00:16:15.634 "uuid": "0047d7be-4e70-428d-80b1-36eeaf401c1a", 00:16:15.634 "is_configured": true, 00:16:15.634 "data_offset": 2048, 00:16:15.634 "data_size": 63488 00:16:15.634 }, 00:16:15.634 { 00:16:15.634 "name": null, 00:16:15.634 "uuid": "4c421fc1-c41b-41fc-b0a4-49ecc3333ec5", 00:16:15.634 "is_configured": false, 00:16:15.634 "data_offset": 0, 00:16:15.634 "data_size": 63488 00:16:15.634 }, 00:16:15.634 { 00:16:15.634 "name": "BaseBdev3", 00:16:15.634 "uuid": "578c81b0-c984-4aae-824a-83013a408ed3", 00:16:15.634 "is_configured": true, 00:16:15.634 "data_offset": 2048, 00:16:15.634 "data_size": 63488 00:16:15.634 }, 00:16:15.634 { 
00:16:15.634 "name": "BaseBdev4", 00:16:15.634 "uuid": "ac1b3b26-d28f-4270-a5f5-8f8c712e74a8", 00:16:15.634 "is_configured": true, 00:16:15.634 "data_offset": 2048, 00:16:15.634 "data_size": 63488 00:16:15.634 } 00:16:15.634 ] 00:16:15.634 }' 00:16:15.634 08:52:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.634 08:52:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.893 08:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:15.893 08:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.893 08:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.893 08:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.893 08:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.893 08:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:15.893 08:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:15.893 08:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.893 08:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.893 [2024-10-05 08:52:52.259341] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:15.893 08:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.893 08:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:15.893 08:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:16:15.893 08:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:15.893 08:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:15.893 08:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.893 08:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:15.893 08:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.893 08:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.893 08:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.893 08:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.893 08:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.893 08:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:15.893 08:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.893 08:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.152 08:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.152 08:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.152 "name": "Existed_Raid", 00:16:16.152 "uuid": "c8ef26cf-ac9e-4682-bcd6-2f1d055fe427", 00:16:16.152 "strip_size_kb": 64, 00:16:16.152 "state": "configuring", 00:16:16.152 "raid_level": "raid5f", 00:16:16.152 "superblock": true, 00:16:16.152 "num_base_bdevs": 4, 00:16:16.152 "num_base_bdevs_discovered": 2, 00:16:16.152 
"num_base_bdevs_operational": 4, 00:16:16.152 "base_bdevs_list": [ 00:16:16.152 { 00:16:16.152 "name": null, 00:16:16.152 "uuid": "0047d7be-4e70-428d-80b1-36eeaf401c1a", 00:16:16.152 "is_configured": false, 00:16:16.152 "data_offset": 0, 00:16:16.152 "data_size": 63488 00:16:16.152 }, 00:16:16.152 { 00:16:16.152 "name": null, 00:16:16.153 "uuid": "4c421fc1-c41b-41fc-b0a4-49ecc3333ec5", 00:16:16.153 "is_configured": false, 00:16:16.153 "data_offset": 0, 00:16:16.153 "data_size": 63488 00:16:16.153 }, 00:16:16.153 { 00:16:16.153 "name": "BaseBdev3", 00:16:16.153 "uuid": "578c81b0-c984-4aae-824a-83013a408ed3", 00:16:16.153 "is_configured": true, 00:16:16.153 "data_offset": 2048, 00:16:16.153 "data_size": 63488 00:16:16.153 }, 00:16:16.153 { 00:16:16.153 "name": "BaseBdev4", 00:16:16.153 "uuid": "ac1b3b26-d28f-4270-a5f5-8f8c712e74a8", 00:16:16.153 "is_configured": true, 00:16:16.153 "data_offset": 2048, 00:16:16.153 "data_size": 63488 00:16:16.153 } 00:16:16.153 ] 00:16:16.153 }' 00:16:16.153 08:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.153 08:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.412 08:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:16.412 08:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.412 08:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.412 08:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.412 08:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.412 08:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:16.412 08:52:52 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:16.412 08:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.412 08:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.412 [2024-10-05 08:52:52.872839] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:16.412 08:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.412 08:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:16.412 08:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:16.412 08:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:16.412 08:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:16.412 08:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:16.412 08:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:16.412 08:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.413 08:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.413 08:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.413 08:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.672 08:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.672 08:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:16.672 08:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.672 08:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:16.672 08:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.672 08:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.672 "name": "Existed_Raid", 00:16:16.672 "uuid": "c8ef26cf-ac9e-4682-bcd6-2f1d055fe427", 00:16:16.672 "strip_size_kb": 64, 00:16:16.672 "state": "configuring", 00:16:16.672 "raid_level": "raid5f", 00:16:16.672 "superblock": true, 00:16:16.672 "num_base_bdevs": 4, 00:16:16.672 "num_base_bdevs_discovered": 3, 00:16:16.672 "num_base_bdevs_operational": 4, 00:16:16.672 "base_bdevs_list": [ 00:16:16.672 { 00:16:16.672 "name": null, 00:16:16.672 "uuid": "0047d7be-4e70-428d-80b1-36eeaf401c1a", 00:16:16.673 "is_configured": false, 00:16:16.673 "data_offset": 0, 00:16:16.673 "data_size": 63488 00:16:16.673 }, 00:16:16.673 { 00:16:16.673 "name": "BaseBdev2", 00:16:16.673 "uuid": "4c421fc1-c41b-41fc-b0a4-49ecc3333ec5", 00:16:16.673 "is_configured": true, 00:16:16.673 "data_offset": 2048, 00:16:16.673 "data_size": 63488 00:16:16.673 }, 00:16:16.673 { 00:16:16.673 "name": "BaseBdev3", 00:16:16.673 "uuid": "578c81b0-c984-4aae-824a-83013a408ed3", 00:16:16.673 "is_configured": true, 00:16:16.673 "data_offset": 2048, 00:16:16.673 "data_size": 63488 00:16:16.673 }, 00:16:16.673 { 00:16:16.673 "name": "BaseBdev4", 00:16:16.673 "uuid": "ac1b3b26-d28f-4270-a5f5-8f8c712e74a8", 00:16:16.673 "is_configured": true, 00:16:16.673 "data_offset": 2048, 00:16:16.673 "data_size": 63488 00:16:16.673 } 00:16:16.673 ] 00:16:16.673 }' 00:16:16.673 08:52:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.673 08:52:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:16:16.933 08:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:16.933 08:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.933 08:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.933 08:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.933 08:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.933 08:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:16.933 08:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:16.933 08:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.933 08:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.933 08:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.933 08:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.933 08:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0047d7be-4e70-428d-80b1-36eeaf401c1a 00:16:16.933 08:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.933 08:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.193 [2024-10-05 08:52:53.427513] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:17.193 [2024-10-05 08:52:53.427720] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:17.193 [2024-10-05 
08:52:53.427732] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:17.193 [2024-10-05 08:52:53.427981] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:17.193 NewBaseBdev 00:16:17.193 08:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.193 08:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:17.193 08:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:16:17.193 08:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:17.193 08:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:17.193 08:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:17.193 08:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:17.193 08:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:17.193 08:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.193 08:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.193 [2024-10-05 08:52:53.434994] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:17.193 [2024-10-05 08:52:53.435016] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:17.193 [2024-10-05 08:52:53.435158] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:17.193 08:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.193 08:52:53 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:17.193 08:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.193 08:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.193 [ 00:16:17.193 { 00:16:17.193 "name": "NewBaseBdev", 00:16:17.193 "aliases": [ 00:16:17.193 "0047d7be-4e70-428d-80b1-36eeaf401c1a" 00:16:17.193 ], 00:16:17.193 "product_name": "Malloc disk", 00:16:17.193 "block_size": 512, 00:16:17.193 "num_blocks": 65536, 00:16:17.193 "uuid": "0047d7be-4e70-428d-80b1-36eeaf401c1a", 00:16:17.193 "assigned_rate_limits": { 00:16:17.193 "rw_ios_per_sec": 0, 00:16:17.193 "rw_mbytes_per_sec": 0, 00:16:17.193 "r_mbytes_per_sec": 0, 00:16:17.193 "w_mbytes_per_sec": 0 00:16:17.193 }, 00:16:17.193 "claimed": true, 00:16:17.193 "claim_type": "exclusive_write", 00:16:17.193 "zoned": false, 00:16:17.193 "supported_io_types": { 00:16:17.193 "read": true, 00:16:17.193 "write": true, 00:16:17.193 "unmap": true, 00:16:17.193 "flush": true, 00:16:17.193 "reset": true, 00:16:17.193 "nvme_admin": false, 00:16:17.193 "nvme_io": false, 00:16:17.193 "nvme_io_md": false, 00:16:17.193 "write_zeroes": true, 00:16:17.193 "zcopy": true, 00:16:17.193 "get_zone_info": false, 00:16:17.193 "zone_management": false, 00:16:17.193 "zone_append": false, 00:16:17.193 "compare": false, 00:16:17.193 "compare_and_write": false, 00:16:17.193 "abort": true, 00:16:17.193 "seek_hole": false, 00:16:17.193 "seek_data": false, 00:16:17.193 "copy": true, 00:16:17.193 "nvme_iov_md": false 00:16:17.193 }, 00:16:17.193 "memory_domains": [ 00:16:17.193 { 00:16:17.193 "dma_device_id": "system", 00:16:17.193 "dma_device_type": 1 00:16:17.193 }, 00:16:17.193 { 00:16:17.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.193 "dma_device_type": 2 00:16:17.193 } 00:16:17.194 ], 00:16:17.194 "driver_specific": {} 00:16:17.194 } 00:16:17.194 ] 00:16:17.194 08:52:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.194 08:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:17.194 08:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:17.194 08:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:17.194 08:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:17.194 08:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:17.194 08:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:17.194 08:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:17.194 08:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.194 08:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.194 08:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.194 08:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.194 08:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.194 08:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.194 08:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.194 08:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.194 08:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:17.194 08:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.194 "name": "Existed_Raid", 00:16:17.194 "uuid": "c8ef26cf-ac9e-4682-bcd6-2f1d055fe427", 00:16:17.194 "strip_size_kb": 64, 00:16:17.194 "state": "online", 00:16:17.194 "raid_level": "raid5f", 00:16:17.194 "superblock": true, 00:16:17.194 "num_base_bdevs": 4, 00:16:17.194 "num_base_bdevs_discovered": 4, 00:16:17.194 "num_base_bdevs_operational": 4, 00:16:17.194 "base_bdevs_list": [ 00:16:17.194 { 00:16:17.194 "name": "NewBaseBdev", 00:16:17.194 "uuid": "0047d7be-4e70-428d-80b1-36eeaf401c1a", 00:16:17.194 "is_configured": true, 00:16:17.194 "data_offset": 2048, 00:16:17.194 "data_size": 63488 00:16:17.194 }, 00:16:17.194 { 00:16:17.194 "name": "BaseBdev2", 00:16:17.194 "uuid": "4c421fc1-c41b-41fc-b0a4-49ecc3333ec5", 00:16:17.194 "is_configured": true, 00:16:17.194 "data_offset": 2048, 00:16:17.194 "data_size": 63488 00:16:17.194 }, 00:16:17.194 { 00:16:17.194 "name": "BaseBdev3", 00:16:17.194 "uuid": "578c81b0-c984-4aae-824a-83013a408ed3", 00:16:17.194 "is_configured": true, 00:16:17.194 "data_offset": 2048, 00:16:17.194 "data_size": 63488 00:16:17.194 }, 00:16:17.194 { 00:16:17.194 "name": "BaseBdev4", 00:16:17.194 "uuid": "ac1b3b26-d28f-4270-a5f5-8f8c712e74a8", 00:16:17.194 "is_configured": true, 00:16:17.194 "data_offset": 2048, 00:16:17.194 "data_size": 63488 00:16:17.194 } 00:16:17.194 ] 00:16:17.194 }' 00:16:17.194 08:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.194 08:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.454 08:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:17.454 08:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:17.454 08:52:53 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:17.454 08:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:17.454 08:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:17.454 08:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:17.454 08:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:17.454 08:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:17.454 08:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.454 08:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.454 [2024-10-05 08:52:53.894315] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:17.454 08:52:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.714 08:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:17.714 "name": "Existed_Raid", 00:16:17.714 "aliases": [ 00:16:17.714 "c8ef26cf-ac9e-4682-bcd6-2f1d055fe427" 00:16:17.714 ], 00:16:17.714 "product_name": "Raid Volume", 00:16:17.714 "block_size": 512, 00:16:17.714 "num_blocks": 190464, 00:16:17.714 "uuid": "c8ef26cf-ac9e-4682-bcd6-2f1d055fe427", 00:16:17.714 "assigned_rate_limits": { 00:16:17.714 "rw_ios_per_sec": 0, 00:16:17.714 "rw_mbytes_per_sec": 0, 00:16:17.714 "r_mbytes_per_sec": 0, 00:16:17.714 "w_mbytes_per_sec": 0 00:16:17.714 }, 00:16:17.714 "claimed": false, 00:16:17.714 "zoned": false, 00:16:17.714 "supported_io_types": { 00:16:17.714 "read": true, 00:16:17.714 "write": true, 00:16:17.714 "unmap": false, 00:16:17.714 "flush": false, 00:16:17.714 "reset": true, 00:16:17.714 "nvme_admin": false, 00:16:17.714 "nvme_io": false, 
00:16:17.714 "nvme_io_md": false, 00:16:17.714 "write_zeroes": true, 00:16:17.714 "zcopy": false, 00:16:17.714 "get_zone_info": false, 00:16:17.714 "zone_management": false, 00:16:17.714 "zone_append": false, 00:16:17.714 "compare": false, 00:16:17.714 "compare_and_write": false, 00:16:17.714 "abort": false, 00:16:17.714 "seek_hole": false, 00:16:17.714 "seek_data": false, 00:16:17.714 "copy": false, 00:16:17.714 "nvme_iov_md": false 00:16:17.714 }, 00:16:17.714 "driver_specific": { 00:16:17.714 "raid": { 00:16:17.714 "uuid": "c8ef26cf-ac9e-4682-bcd6-2f1d055fe427", 00:16:17.714 "strip_size_kb": 64, 00:16:17.714 "state": "online", 00:16:17.714 "raid_level": "raid5f", 00:16:17.714 "superblock": true, 00:16:17.714 "num_base_bdevs": 4, 00:16:17.714 "num_base_bdevs_discovered": 4, 00:16:17.714 "num_base_bdevs_operational": 4, 00:16:17.714 "base_bdevs_list": [ 00:16:17.714 { 00:16:17.714 "name": "NewBaseBdev", 00:16:17.714 "uuid": "0047d7be-4e70-428d-80b1-36eeaf401c1a", 00:16:17.714 "is_configured": true, 00:16:17.714 "data_offset": 2048, 00:16:17.714 "data_size": 63488 00:16:17.714 }, 00:16:17.714 { 00:16:17.714 "name": "BaseBdev2", 00:16:17.714 "uuid": "4c421fc1-c41b-41fc-b0a4-49ecc3333ec5", 00:16:17.714 "is_configured": true, 00:16:17.714 "data_offset": 2048, 00:16:17.714 "data_size": 63488 00:16:17.714 }, 00:16:17.714 { 00:16:17.714 "name": "BaseBdev3", 00:16:17.714 "uuid": "578c81b0-c984-4aae-824a-83013a408ed3", 00:16:17.714 "is_configured": true, 00:16:17.714 "data_offset": 2048, 00:16:17.714 "data_size": 63488 00:16:17.714 }, 00:16:17.714 { 00:16:17.714 "name": "BaseBdev4", 00:16:17.714 "uuid": "ac1b3b26-d28f-4270-a5f5-8f8c712e74a8", 00:16:17.714 "is_configured": true, 00:16:17.714 "data_offset": 2048, 00:16:17.714 "data_size": 63488 00:16:17.714 } 00:16:17.714 ] 00:16:17.714 } 00:16:17.714 } 00:16:17.714 }' 00:16:17.714 08:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:16:17.714 08:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:17.714 BaseBdev2 00:16:17.714 BaseBdev3 00:16:17.714 BaseBdev4' 00:16:17.714 08:52:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:17.714 08:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:17.714 08:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:17.714 08:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:17.714 08:52:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.714 08:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:17.714 08:52:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.714 08:52:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.714 08:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:17.714 08:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:17.714 08:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:17.714 08:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:17.714 08:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:17.714 08:52:54 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.714 08:52:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.714 08:52:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.714 08:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:17.714 08:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:17.714 08:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:17.714 08:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:17.714 08:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:17.714 08:52:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.714 08:52:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.714 08:52:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.714 08:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:17.714 08:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:17.714 08:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:17.975 08:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:17.975 08:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:17.975 08:52:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:17.975 08:52:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.975 08:52:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.975 08:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:17.975 08:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:17.975 08:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:17.975 08:52:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.975 08:52:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.975 [2024-10-05 08:52:54.237523] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:17.975 [2024-10-05 08:52:54.237599] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:17.975 [2024-10-05 08:52:54.237680] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:17.975 [2024-10-05 08:52:54.237986] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:17.975 [2024-10-05 08:52:54.238045] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:17.975 08:52:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.975 08:52:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80238 00:16:17.975 08:52:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 80238 ']' 00:16:17.975 08:52:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 80238 00:16:17.975 08:52:54 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:16:17.975 08:52:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:17.975 08:52:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80238 00:16:17.975 killing process with pid 80238 00:16:17.975 08:52:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:17.975 08:52:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:17.975 08:52:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80238' 00:16:17.975 08:52:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 80238 00:16:17.975 [2024-10-05 08:52:54.277834] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:17.975 08:52:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 80238 00:16:18.235 [2024-10-05 08:52:54.649703] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:19.617 08:52:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:19.617 00:16:19.617 real 0m11.573s 00:16:19.617 user 0m18.306s 00:16:19.617 sys 0m2.222s 00:16:19.617 08:52:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:19.617 ************************************ 00:16:19.617 END TEST raid5f_state_function_test_sb 00:16:19.617 ************************************ 00:16:19.617 08:52:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.617 08:52:55 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:16:19.617 08:52:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:19.617 
08:52:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:19.617 08:52:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:19.617 ************************************ 00:16:19.617 START TEST raid5f_superblock_test 00:16:19.617 ************************************ 00:16:19.617 08:52:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 4 00:16:19.617 08:52:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:19.617 08:52:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:19.617 08:52:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:19.617 08:52:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:19.617 08:52:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:19.617 08:52:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:19.617 08:52:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:19.618 08:52:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:19.618 08:52:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:19.618 08:52:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:19.618 08:52:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:19.618 08:52:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:19.618 08:52:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:19.618 08:52:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:16:19.618 08:52:55 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:19.618 08:52:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:19.618 08:52:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=80844 00:16:19.618 08:52:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:19.618 08:52:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 80844 00:16:19.618 08:52:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 80844 ']' 00:16:19.618 08:52:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.618 08:52:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:19.618 08:52:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.618 08:52:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:19.618 08:52:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.618 [2024-10-05 08:52:56.022373] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:16:19.618 [2024-10-05 08:52:56.022489] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80844 ] 00:16:19.878 [2024-10-05 08:52:56.183839] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.139 [2024-10-05 08:52:56.368230] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.139 [2024-10-05 08:52:56.552852] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:20.139 [2024-10-05 08:52:56.553010] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:20.399 08:52:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:20.399 08:52:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:16:20.399 08:52:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:20.399 08:52:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:20.399 08:52:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:20.399 08:52:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:20.399 08:52:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:20.399 08:52:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:20.399 08:52:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:20.399 08:52:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:20.399 08:52:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:16:20.399 08:52:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.399 08:52:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.659 malloc1 00:16:20.659 08:52:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.659 08:52:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:20.659 08:52:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.659 08:52:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.659 [2024-10-05 08:52:56.876966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:20.659 [2024-10-05 08:52:56.877158] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.659 [2024-10-05 08:52:56.877202] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:20.659 [2024-10-05 08:52:56.877240] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.659 [2024-10-05 08:52:56.879237] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.659 [2024-10-05 08:52:56.879308] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:20.659 pt1 00:16:20.659 08:52:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.659 08:52:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:20.659 08:52:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:20.659 08:52:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:20.659 08:52:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:16:20.659 08:52:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:20.659 08:52:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:20.659 08:52:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:20.659 08:52:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:20.659 08:52:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:20.659 08:52:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.659 08:52:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.659 malloc2 00:16:20.659 08:52:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.659 08:52:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:20.659 08:52:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.659 08:52:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.660 [2024-10-05 08:52:56.948213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:20.660 [2024-10-05 08:52:56.948320] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.660 [2024-10-05 08:52:56.948355] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:20.660 [2024-10-05 08:52:56.948381] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.660 [2024-10-05 08:52:56.950285] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.660 [2024-10-05 08:52:56.950357] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:20.660 pt2 00:16:20.660 08:52:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.660 08:52:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:20.660 08:52:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:20.660 08:52:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:20.660 08:52:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:20.660 08:52:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:20.660 08:52:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:20.660 08:52:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:20.660 08:52:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:20.660 08:52:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:20.660 08:52:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.660 08:52:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.660 malloc3 00:16:20.660 08:52:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.660 08:52:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:20.660 08:52:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.660 08:52:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.660 [2024-10-05 08:52:57.005659] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:20.660 [2024-10-05 08:52:57.005763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.660 [2024-10-05 08:52:57.005800] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:20.660 [2024-10-05 08:52:57.005824] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.660 [2024-10-05 08:52:57.007699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.660 pt3 00:16:20.660 [2024-10-05 08:52:57.007770] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:20.660 08:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.660 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:20.660 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:20.660 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:16:20.660 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:16:20.660 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:20.660 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:20.660 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:20.660 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:20.660 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:16:20.660 08:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.660 08:52:57 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.660 malloc4 00:16:20.660 08:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.660 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:20.660 08:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.660 08:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.660 [2024-10-05 08:52:57.062342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:20.660 [2024-10-05 08:52:57.062443] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.660 [2024-10-05 08:52:57.062476] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:20.660 [2024-10-05 08:52:57.062503] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.660 [2024-10-05 08:52:57.064367] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.660 [2024-10-05 08:52:57.064438] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:20.660 pt4 00:16:20.660 08:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.660 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:20.660 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:20.660 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:16:20.660 08:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.660 08:52:57 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:20.660 [2024-10-05 08:52:57.074378] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:20.660 [2024-10-05 08:52:57.076024] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:20.660 [2024-10-05 08:52:57.076121] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:20.660 [2024-10-05 08:52:57.076198] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:20.660 [2024-10-05 08:52:57.076404] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:20.660 [2024-10-05 08:52:57.076455] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:20.660 [2024-10-05 08:52:57.076695] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:20.660 [2024-10-05 08:52:57.083252] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:20.660 [2024-10-05 08:52:57.083321] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:20.660 [2024-10-05 08:52:57.083528] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:20.660 08:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.660 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:20.660 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.660 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.660 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.660 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.660 
08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:20.660 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.660 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.660 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.660 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.660 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.660 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.660 08:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.660 08:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.660 08:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.920 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.920 "name": "raid_bdev1", 00:16:20.920 "uuid": "5dc13c70-7597-4883-9ab3-dd1b710beb4f", 00:16:20.920 "strip_size_kb": 64, 00:16:20.920 "state": "online", 00:16:20.920 "raid_level": "raid5f", 00:16:20.920 "superblock": true, 00:16:20.920 "num_base_bdevs": 4, 00:16:20.920 "num_base_bdevs_discovered": 4, 00:16:20.920 "num_base_bdevs_operational": 4, 00:16:20.920 "base_bdevs_list": [ 00:16:20.920 { 00:16:20.920 "name": "pt1", 00:16:20.920 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:20.920 "is_configured": true, 00:16:20.920 "data_offset": 2048, 00:16:20.920 "data_size": 63488 00:16:20.920 }, 00:16:20.920 { 00:16:20.920 "name": "pt2", 00:16:20.920 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:20.920 "is_configured": true, 00:16:20.920 "data_offset": 2048, 00:16:20.920 
"data_size": 63488 00:16:20.920 }, 00:16:20.920 { 00:16:20.920 "name": "pt3", 00:16:20.920 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:20.920 "is_configured": true, 00:16:20.920 "data_offset": 2048, 00:16:20.920 "data_size": 63488 00:16:20.920 }, 00:16:20.920 { 00:16:20.920 "name": "pt4", 00:16:20.920 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:20.920 "is_configured": true, 00:16:20.920 "data_offset": 2048, 00:16:20.920 "data_size": 63488 00:16:20.920 } 00:16:20.920 ] 00:16:20.920 }' 00:16:20.920 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.920 08:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.180 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:21.180 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:21.180 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:21.180 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:21.180 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:21.180 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:21.180 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:21.180 08:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.180 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:21.180 08:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.180 [2024-10-05 08:52:57.554490] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:21.180 08:52:57 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.180 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:21.180 "name": "raid_bdev1", 00:16:21.180 "aliases": [ 00:16:21.180 "5dc13c70-7597-4883-9ab3-dd1b710beb4f" 00:16:21.180 ], 00:16:21.180 "product_name": "Raid Volume", 00:16:21.180 "block_size": 512, 00:16:21.180 "num_blocks": 190464, 00:16:21.180 "uuid": "5dc13c70-7597-4883-9ab3-dd1b710beb4f", 00:16:21.180 "assigned_rate_limits": { 00:16:21.180 "rw_ios_per_sec": 0, 00:16:21.180 "rw_mbytes_per_sec": 0, 00:16:21.180 "r_mbytes_per_sec": 0, 00:16:21.180 "w_mbytes_per_sec": 0 00:16:21.180 }, 00:16:21.180 "claimed": false, 00:16:21.180 "zoned": false, 00:16:21.180 "supported_io_types": { 00:16:21.180 "read": true, 00:16:21.180 "write": true, 00:16:21.180 "unmap": false, 00:16:21.180 "flush": false, 00:16:21.180 "reset": true, 00:16:21.180 "nvme_admin": false, 00:16:21.180 "nvme_io": false, 00:16:21.180 "nvme_io_md": false, 00:16:21.180 "write_zeroes": true, 00:16:21.180 "zcopy": false, 00:16:21.180 "get_zone_info": false, 00:16:21.180 "zone_management": false, 00:16:21.180 "zone_append": false, 00:16:21.180 "compare": false, 00:16:21.180 "compare_and_write": false, 00:16:21.180 "abort": false, 00:16:21.180 "seek_hole": false, 00:16:21.180 "seek_data": false, 00:16:21.180 "copy": false, 00:16:21.180 "nvme_iov_md": false 00:16:21.180 }, 00:16:21.180 "driver_specific": { 00:16:21.180 "raid": { 00:16:21.180 "uuid": "5dc13c70-7597-4883-9ab3-dd1b710beb4f", 00:16:21.180 "strip_size_kb": 64, 00:16:21.180 "state": "online", 00:16:21.180 "raid_level": "raid5f", 00:16:21.180 "superblock": true, 00:16:21.180 "num_base_bdevs": 4, 00:16:21.180 "num_base_bdevs_discovered": 4, 00:16:21.180 "num_base_bdevs_operational": 4, 00:16:21.180 "base_bdevs_list": [ 00:16:21.180 { 00:16:21.180 "name": "pt1", 00:16:21.180 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:21.180 "is_configured": true, 00:16:21.180 "data_offset": 2048, 
00:16:21.180 "data_size": 63488 00:16:21.180 }, 00:16:21.180 { 00:16:21.180 "name": "pt2", 00:16:21.180 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:21.180 "is_configured": true, 00:16:21.180 "data_offset": 2048, 00:16:21.180 "data_size": 63488 00:16:21.180 }, 00:16:21.180 { 00:16:21.180 "name": "pt3", 00:16:21.180 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:21.180 "is_configured": true, 00:16:21.180 "data_offset": 2048, 00:16:21.180 "data_size": 63488 00:16:21.180 }, 00:16:21.180 { 00:16:21.180 "name": "pt4", 00:16:21.180 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:21.180 "is_configured": true, 00:16:21.180 "data_offset": 2048, 00:16:21.180 "data_size": 63488 00:16:21.180 } 00:16:21.180 ] 00:16:21.180 } 00:16:21.180 } 00:16:21.180 }' 00:16:21.180 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:21.180 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:21.180 pt2 00:16:21.180 pt3 00:16:21.180 pt4' 00:16:21.180 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.440 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:21.440 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:21.440 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:21.440 08:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.440 08:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.440 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.440 08:52:57 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.440 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:21.440 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:21.440 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:21.440 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:21.440 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.440 08:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.440 08:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.440 08:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.440 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:21.440 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:21.440 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:21.441 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.441 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:21.441 08:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.441 08:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.441 08:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.441 08:52:57 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:21.441 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:21.441 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:21.441 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:21.441 08:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.441 08:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.441 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.441 08:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.441 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:21.441 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:21.441 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:21.441 08:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.441 08:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.441 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:21.441 [2024-10-05 08:52:57.861948] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:21.441 08:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.441 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5dc13c70-7597-4883-9ab3-dd1b710beb4f 00:16:21.441 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
5dc13c70-7597-4883-9ab3-dd1b710beb4f ']' 00:16:21.441 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:21.441 08:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.441 08:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.701 [2024-10-05 08:52:57.913713] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:21.701 [2024-10-05 08:52:57.913782] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:21.701 [2024-10-05 08:52:57.913864] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:21.701 [2024-10-05 08:52:57.913949] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:21.701 [2024-10-05 08:52:57.914014] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:21.701 08:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.701 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.701 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:21.701 08:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.701 08:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.701 08:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.701 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:21.701 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:21.701 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:21.701 
08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:21.701 08:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.701 08:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.701 08:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.701 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:21.701 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:21.701 08:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.701 08:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.701 08:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.701 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:21.701 08:52:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:21.701 08:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.701 08:52:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.701 08:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.701 08:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:21.701 08:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:16:21.701 08:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.701 08:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.701 08:52:58 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.701 08:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.702 [2024-10-05 08:52:58.077452] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:21.702 [2024-10-05 08:52:58.079305] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:21.702 [2024-10-05 08:52:58.079387] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:21.702 [2024-10-05 08:52:58.079433] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:21.702 [2024-10-05 08:52:58.079494] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:21.702 [2024-10-05 08:52:58.079560] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:21.702 [2024-10-05 08:52:58.079610] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:21.702 [2024-10-05 08:52:58.079630] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:21.702 [2024-10-05 08:52:58.079642] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:21.702 [2024-10-05 08:52:58.079654] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:21.702 request: 00:16:21.702 { 00:16:21.702 "name": "raid_bdev1", 00:16:21.702 "raid_level": "raid5f", 00:16:21.702 "base_bdevs": [ 00:16:21.702 "malloc1", 00:16:21.702 "malloc2", 00:16:21.702 "malloc3", 00:16:21.702 "malloc4" 00:16:21.702 ], 00:16:21.702 "strip_size_kb": 64, 00:16:21.702 "superblock": false, 00:16:21.702 "method": "bdev_raid_create", 00:16:21.702 "req_id": 1 00:16:21.702 } 00:16:21.702 Got JSON-RPC error response 
00:16:21.702 response: 00:16:21.702 { 00:16:21.702 "code": -17, 00:16:21.702 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:21.702 } 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.702 [2024-10-05 08:52:58.145316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:21.702 [2024-10-05 08:52:58.145407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:16:21.702 [2024-10-05 08:52:58.145437] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:21.702 [2024-10-05 08:52:58.145463] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.702 [2024-10-05 08:52:58.147492] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.702 [2024-10-05 08:52:58.147566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:21.702 [2024-10-05 08:52:58.147646] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:21.702 [2024-10-05 08:52:58.147721] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:21.702 pt1 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.702 08:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.995 08:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.995 08:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.995 "name": "raid_bdev1", 00:16:21.995 "uuid": "5dc13c70-7597-4883-9ab3-dd1b710beb4f", 00:16:21.995 "strip_size_kb": 64, 00:16:21.995 "state": "configuring", 00:16:21.995 "raid_level": "raid5f", 00:16:21.995 "superblock": true, 00:16:21.995 "num_base_bdevs": 4, 00:16:21.995 "num_base_bdevs_discovered": 1, 00:16:21.995 "num_base_bdevs_operational": 4, 00:16:21.996 "base_bdevs_list": [ 00:16:21.996 { 00:16:21.996 "name": "pt1", 00:16:21.996 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:21.996 "is_configured": true, 00:16:21.996 "data_offset": 2048, 00:16:21.996 "data_size": 63488 00:16:21.996 }, 00:16:21.996 { 00:16:21.996 "name": null, 00:16:21.996 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:21.996 "is_configured": false, 00:16:21.996 "data_offset": 2048, 00:16:21.996 "data_size": 63488 00:16:21.996 }, 00:16:21.996 { 00:16:21.996 "name": null, 00:16:21.996 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:21.996 "is_configured": false, 00:16:21.996 "data_offset": 2048, 00:16:21.996 "data_size": 63488 00:16:21.996 }, 00:16:21.996 { 00:16:21.996 "name": null, 00:16:21.996 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:21.996 "is_configured": false, 00:16:21.996 "data_offset": 2048, 00:16:21.996 "data_size": 63488 00:16:21.996 } 00:16:21.996 ] 00:16:21.996 }' 
00:16:21.996 08:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.996 08:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.271 08:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:22.271 08:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:22.271 08:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.271 08:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.271 [2024-10-05 08:52:58.600569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:22.271 [2024-10-05 08:52:58.600669] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.271 [2024-10-05 08:52:58.600700] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:22.271 [2024-10-05 08:52:58.600728] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.271 [2024-10-05 08:52:58.601129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.271 [2024-10-05 08:52:58.601188] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:22.271 [2024-10-05 08:52:58.601276] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:22.271 [2024-10-05 08:52:58.601325] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:22.271 pt2 00:16:22.271 08:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.271 08:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:22.271 08:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:22.271 08:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.271 [2024-10-05 08:52:58.612562] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:22.271 08:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.271 08:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:22.271 08:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.271 08:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:22.271 08:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.271 08:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.271 08:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:22.271 08:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.271 08:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.271 08:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.271 08:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.271 08:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.271 08:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.271 08:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.271 08:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.271 08:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:16:22.271 08:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.271 "name": "raid_bdev1", 00:16:22.271 "uuid": "5dc13c70-7597-4883-9ab3-dd1b710beb4f", 00:16:22.271 "strip_size_kb": 64, 00:16:22.271 "state": "configuring", 00:16:22.271 "raid_level": "raid5f", 00:16:22.271 "superblock": true, 00:16:22.271 "num_base_bdevs": 4, 00:16:22.271 "num_base_bdevs_discovered": 1, 00:16:22.271 "num_base_bdevs_operational": 4, 00:16:22.271 "base_bdevs_list": [ 00:16:22.271 { 00:16:22.271 "name": "pt1", 00:16:22.271 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:22.271 "is_configured": true, 00:16:22.271 "data_offset": 2048, 00:16:22.271 "data_size": 63488 00:16:22.271 }, 00:16:22.271 { 00:16:22.271 "name": null, 00:16:22.271 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:22.271 "is_configured": false, 00:16:22.271 "data_offset": 0, 00:16:22.271 "data_size": 63488 00:16:22.271 }, 00:16:22.271 { 00:16:22.272 "name": null, 00:16:22.272 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:22.272 "is_configured": false, 00:16:22.272 "data_offset": 2048, 00:16:22.272 "data_size": 63488 00:16:22.272 }, 00:16:22.272 { 00:16:22.272 "name": null, 00:16:22.272 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:22.272 "is_configured": false, 00:16:22.272 "data_offset": 2048, 00:16:22.272 "data_size": 63488 00:16:22.272 } 00:16:22.272 ] 00:16:22.272 }' 00:16:22.272 08:52:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.272 08:52:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.860 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:22.860 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:22.860 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:16:22.860 08:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.860 08:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.860 [2024-10-05 08:52:59.055876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:22.860 [2024-10-05 08:52:59.055976] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.860 [2024-10-05 08:52:59.056009] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:22.860 [2024-10-05 08:52:59.056035] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.860 [2024-10-05 08:52:59.056388] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.860 [2024-10-05 08:52:59.056440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:22.860 [2024-10-05 08:52:59.056524] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:22.860 [2024-10-05 08:52:59.056576] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:22.860 pt2 00:16:22.860 08:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.860 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:22.860 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:22.860 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:22.860 08:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.860 08:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.860 [2024-10-05 08:52:59.067861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
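The `(( i < num_base_bdevs ))` counter visible in the trace drives one `bdev_passthru_create` per remaining base bdev (pt2 through pt4), each with a fixed UUID the superblock refers to. The same loop in isolation, with the RPC stubbed out:

```shell
# Stubbed rpc_cmd; the real loop registers pt2..pt4 over malloc2..malloc4,
# reusing the fixed UUIDs recorded in the raid superblock.
rpc_cmd() { echo "rpc: $*"; }

num_base_bdevs=4
for (( i = 1; i < num_base_bdevs; i++ )); do
  n=$((i + 1))
  rpc_cmd bdev_passthru_create -b "malloc$n" -p "pt$n" -u "00000000-0000-0000-0000-00000000000$n"
done
```

Once the last passthru bdev registers, the examine path finds a superblock on every member and assembles `raid_bdev1` automatically, which is why the state flips to "online" without an explicit create call.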
00:16:22.860 [2024-10-05 08:52:59.067946] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.860 [2024-10-05 08:52:59.067989] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:22.860 [2024-10-05 08:52:59.068015] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.860 [2024-10-05 08:52:59.068323] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.860 [2024-10-05 08:52:59.068376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:22.860 [2024-10-05 08:52:59.068454] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:22.860 [2024-10-05 08:52:59.068495] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:22.860 pt3 00:16:22.860 08:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.860 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:22.860 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:22.860 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:22.860 08:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.860 08:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.860 [2024-10-05 08:52:59.079814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:22.860 [2024-10-05 08:52:59.079899] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.860 [2024-10-05 08:52:59.079931] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:22.860 [2024-10-05 08:52:59.079964] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.860 [2024-10-05 08:52:59.080311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.860 [2024-10-05 08:52:59.080366] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:22.860 [2024-10-05 08:52:59.080448] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:22.860 [2024-10-05 08:52:59.080497] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:22.860 [2024-10-05 08:52:59.080645] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:22.860 [2024-10-05 08:52:59.080681] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:22.860 [2024-10-05 08:52:59.080911] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:22.860 [2024-10-05 08:52:59.087528] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:22.860 [2024-10-05 08:52:59.087584] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:22.860 [2024-10-05 08:52:59.087767] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.860 pt4 00:16:22.860 08:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.860 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:22.860 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:22.861 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:22.861 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.861 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:22.861 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.861 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.861 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:22.861 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.861 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.861 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.861 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.861 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.861 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.861 08:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.861 08:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.861 08:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.861 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.861 "name": "raid_bdev1", 00:16:22.861 "uuid": "5dc13c70-7597-4883-9ab3-dd1b710beb4f", 00:16:22.861 "strip_size_kb": 64, 00:16:22.861 "state": "online", 00:16:22.861 "raid_level": "raid5f", 00:16:22.861 "superblock": true, 00:16:22.861 "num_base_bdevs": 4, 00:16:22.861 "num_base_bdevs_discovered": 4, 00:16:22.861 "num_base_bdevs_operational": 4, 00:16:22.861 "base_bdevs_list": [ 00:16:22.861 { 00:16:22.861 "name": "pt1", 00:16:22.861 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:22.861 "is_configured": true, 00:16:22.861 
"data_offset": 2048, 00:16:22.861 "data_size": 63488 00:16:22.861 }, 00:16:22.861 { 00:16:22.861 "name": "pt2", 00:16:22.861 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:22.861 "is_configured": true, 00:16:22.861 "data_offset": 2048, 00:16:22.861 "data_size": 63488 00:16:22.861 }, 00:16:22.861 { 00:16:22.861 "name": "pt3", 00:16:22.861 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:22.861 "is_configured": true, 00:16:22.861 "data_offset": 2048, 00:16:22.861 "data_size": 63488 00:16:22.861 }, 00:16:22.861 { 00:16:22.861 "name": "pt4", 00:16:22.861 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:22.861 "is_configured": true, 00:16:22.861 "data_offset": 2048, 00:16:22.861 "data_size": 63488 00:16:22.861 } 00:16:22.861 ] 00:16:22.861 }' 00:16:22.861 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.861 08:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.121 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:23.121 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:23.121 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:23.121 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:23.121 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:23.121 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:23.121 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:23.121 08:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.121 08:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.121 08:52:59 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:23.121 [2024-10-05 08:52:59.471300] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:23.121 08:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.121 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:23.121 "name": "raid_bdev1", 00:16:23.121 "aliases": [ 00:16:23.121 "5dc13c70-7597-4883-9ab3-dd1b710beb4f" 00:16:23.121 ], 00:16:23.121 "product_name": "Raid Volume", 00:16:23.121 "block_size": 512, 00:16:23.121 "num_blocks": 190464, 00:16:23.121 "uuid": "5dc13c70-7597-4883-9ab3-dd1b710beb4f", 00:16:23.121 "assigned_rate_limits": { 00:16:23.121 "rw_ios_per_sec": 0, 00:16:23.121 "rw_mbytes_per_sec": 0, 00:16:23.121 "r_mbytes_per_sec": 0, 00:16:23.121 "w_mbytes_per_sec": 0 00:16:23.121 }, 00:16:23.121 "claimed": false, 00:16:23.121 "zoned": false, 00:16:23.121 "supported_io_types": { 00:16:23.121 "read": true, 00:16:23.121 "write": true, 00:16:23.121 "unmap": false, 00:16:23.121 "flush": false, 00:16:23.121 "reset": true, 00:16:23.121 "nvme_admin": false, 00:16:23.121 "nvme_io": false, 00:16:23.121 "nvme_io_md": false, 00:16:23.121 "write_zeroes": true, 00:16:23.121 "zcopy": false, 00:16:23.121 "get_zone_info": false, 00:16:23.121 "zone_management": false, 00:16:23.121 "zone_append": false, 00:16:23.121 "compare": false, 00:16:23.121 "compare_and_write": false, 00:16:23.121 "abort": false, 00:16:23.122 "seek_hole": false, 00:16:23.122 "seek_data": false, 00:16:23.122 "copy": false, 00:16:23.122 "nvme_iov_md": false 00:16:23.122 }, 00:16:23.122 "driver_specific": { 00:16:23.122 "raid": { 00:16:23.122 "uuid": "5dc13c70-7597-4883-9ab3-dd1b710beb4f", 00:16:23.122 "strip_size_kb": 64, 00:16:23.122 "state": "online", 00:16:23.122 "raid_level": "raid5f", 00:16:23.122 "superblock": true, 00:16:23.122 "num_base_bdevs": 4, 00:16:23.122 "num_base_bdevs_discovered": 4, 
00:16:23.122 "num_base_bdevs_operational": 4, 00:16:23.122 "base_bdevs_list": [ 00:16:23.122 { 00:16:23.122 "name": "pt1", 00:16:23.122 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:23.122 "is_configured": true, 00:16:23.122 "data_offset": 2048, 00:16:23.122 "data_size": 63488 00:16:23.122 }, 00:16:23.122 { 00:16:23.122 "name": "pt2", 00:16:23.122 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:23.122 "is_configured": true, 00:16:23.122 "data_offset": 2048, 00:16:23.122 "data_size": 63488 00:16:23.122 }, 00:16:23.122 { 00:16:23.122 "name": "pt3", 00:16:23.122 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:23.122 "is_configured": true, 00:16:23.122 "data_offset": 2048, 00:16:23.122 "data_size": 63488 00:16:23.122 }, 00:16:23.122 { 00:16:23.122 "name": "pt4", 00:16:23.122 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:23.122 "is_configured": true, 00:16:23.122 "data_offset": 2048, 00:16:23.122 "data_size": 63488 00:16:23.122 } 00:16:23.122 ] 00:16:23.122 } 00:16:23.122 } 00:16:23.122 }' 00:16:23.122 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:23.122 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:23.122 pt2 00:16:23.122 pt3 00:16:23.122 pt4' 00:16:23.122 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.382 08:52:59 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.382 
08:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.382 [2024-10-05 08:52:59.758758] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
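The `cmp_base_bdev='512 '` assignments and `[[ 512 == \5\1\2\ \ \ ]]` checks above compare one joined field string per bdev: `jq`'s `join` renders null fields as empty strings, so the value is `512` followed by trailing separator spaces, and the backslash-escaped pattern forces a literal (non-glob) match on exactly that string. A reduced sketch, using an illustrative record where the metadata fields are null:

```shell
# Illustrative record; nulls stand in for metadata fields a plain 512-byte
# block bdev does not carry.
bdev='[{"block_size":512,"md_size":null,"md_interleave":null,"dif_type":null}]'
cmp_base_bdev=$(echo "$bdev" | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')

# join turns each null into an empty string, leaving "512" plus three
# separator spaces; the quoted comparison checks the exact string.
if [[ $cmp_base_bdev == "512   " ]]; then
  echo "base bdev matches raid bdev layout"
fi
```

Matching the raid bdev's own joined string against every base bdev's is how the test asserts that block size and metadata layout are uniform across all members.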
00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5dc13c70-7597-4883-9ab3-dd1b710beb4f '!=' 5dc13c70-7597-4883-9ab3-dd1b710beb4f ']' 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.382 [2024-10-05 08:52:59.802598] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.382 08:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.643 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.643 "name": "raid_bdev1", 00:16:23.643 "uuid": "5dc13c70-7597-4883-9ab3-dd1b710beb4f", 00:16:23.643 "strip_size_kb": 64, 00:16:23.643 "state": "online", 00:16:23.643 "raid_level": "raid5f", 00:16:23.643 "superblock": true, 00:16:23.643 "num_base_bdevs": 4, 00:16:23.643 "num_base_bdevs_discovered": 3, 00:16:23.643 "num_base_bdevs_operational": 3, 00:16:23.643 "base_bdevs_list": [ 00:16:23.643 { 00:16:23.643 "name": null, 00:16:23.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.643 "is_configured": false, 00:16:23.643 "data_offset": 0, 00:16:23.643 "data_size": 63488 00:16:23.643 }, 00:16:23.643 { 00:16:23.643 "name": "pt2", 00:16:23.643 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:23.643 "is_configured": true, 00:16:23.643 "data_offset": 2048, 00:16:23.643 "data_size": 63488 00:16:23.643 }, 00:16:23.643 { 00:16:23.643 "name": "pt3", 00:16:23.643 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:23.643 "is_configured": true, 00:16:23.643 "data_offset": 2048, 00:16:23.643 "data_size": 63488 00:16:23.643 }, 00:16:23.643 { 00:16:23.643 "name": "pt4", 00:16:23.643 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:23.643 "is_configured": true, 00:16:23.643 
"data_offset": 2048, 00:16:23.643 "data_size": 63488 00:16:23.643 } 00:16:23.643 ] 00:16:23.643 }' 00:16:23.643 08:52:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.643 08:52:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.904 [2024-10-05 08:53:00.257756] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:23.904 [2024-10-05 08:53:00.257786] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:23.904 [2024-10-05 08:53:00.257844] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:23.904 [2024-10-05 08:53:00.257910] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:23.904 [2024-10-05 08:53:00.257919] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.904 [2024-10-05 08:53:00.357575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:23.904 [2024-10-05 08:53:00.357626] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.904 [2024-10-05 08:53:00.357643] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:23.904 [2024-10-05 08:53:00.357652] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.904 [2024-10-05 08:53:00.359755] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:23.904 [2024-10-05 08:53:00.359788] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:23.904 [2024-10-05 08:53:00.359856] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:23.904 [2024-10-05 08:53:00.359895] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:23.904 pt2 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.904 08:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.164 08:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.165 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.165 "name": "raid_bdev1", 00:16:24.165 "uuid": "5dc13c70-7597-4883-9ab3-dd1b710beb4f", 00:16:24.165 "strip_size_kb": 64, 00:16:24.165 "state": "configuring", 00:16:24.165 "raid_level": "raid5f", 00:16:24.165 "superblock": true, 00:16:24.165 
"num_base_bdevs": 4, 00:16:24.165 "num_base_bdevs_discovered": 1, 00:16:24.165 "num_base_bdevs_operational": 3, 00:16:24.165 "base_bdevs_list": [ 00:16:24.165 { 00:16:24.165 "name": null, 00:16:24.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.165 "is_configured": false, 00:16:24.165 "data_offset": 2048, 00:16:24.165 "data_size": 63488 00:16:24.165 }, 00:16:24.165 { 00:16:24.165 "name": "pt2", 00:16:24.165 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:24.165 "is_configured": true, 00:16:24.165 "data_offset": 2048, 00:16:24.165 "data_size": 63488 00:16:24.165 }, 00:16:24.165 { 00:16:24.165 "name": null, 00:16:24.165 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:24.165 "is_configured": false, 00:16:24.165 "data_offset": 2048, 00:16:24.165 "data_size": 63488 00:16:24.165 }, 00:16:24.165 { 00:16:24.165 "name": null, 00:16:24.165 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:24.165 "is_configured": false, 00:16:24.165 "data_offset": 2048, 00:16:24.165 "data_size": 63488 00:16:24.165 } 00:16:24.165 ] 00:16:24.165 }' 00:16:24.165 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.165 08:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.425 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:24.425 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:24.425 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:24.425 08:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.425 08:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.425 [2024-10-05 08:53:00.788987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:24.425 [2024-10-05 
08:53:00.789032] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.425 [2024-10-05 08:53:00.789048] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:24.425 [2024-10-05 08:53:00.789056] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.425 [2024-10-05 08:53:00.789437] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.425 [2024-10-05 08:53:00.789464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:24.425 [2024-10-05 08:53:00.789531] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:24.425 [2024-10-05 08:53:00.789559] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:24.425 pt3 00:16:24.425 08:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.425 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:24.425 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:24.425 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:24.425 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:24.425 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.425 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:24.425 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.425 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.425 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:24.425 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.425 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.425 08:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.425 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.425 08:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.425 08:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.425 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.425 "name": "raid_bdev1", 00:16:24.425 "uuid": "5dc13c70-7597-4883-9ab3-dd1b710beb4f", 00:16:24.425 "strip_size_kb": 64, 00:16:24.425 "state": "configuring", 00:16:24.425 "raid_level": "raid5f", 00:16:24.425 "superblock": true, 00:16:24.425 "num_base_bdevs": 4, 00:16:24.425 "num_base_bdevs_discovered": 2, 00:16:24.425 "num_base_bdevs_operational": 3, 00:16:24.425 "base_bdevs_list": [ 00:16:24.425 { 00:16:24.425 "name": null, 00:16:24.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.425 "is_configured": false, 00:16:24.425 "data_offset": 2048, 00:16:24.425 "data_size": 63488 00:16:24.425 }, 00:16:24.425 { 00:16:24.425 "name": "pt2", 00:16:24.425 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:24.425 "is_configured": true, 00:16:24.425 "data_offset": 2048, 00:16:24.425 "data_size": 63488 00:16:24.425 }, 00:16:24.425 { 00:16:24.425 "name": "pt3", 00:16:24.425 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:24.425 "is_configured": true, 00:16:24.425 "data_offset": 2048, 00:16:24.425 "data_size": 63488 00:16:24.425 }, 00:16:24.425 { 00:16:24.425 "name": null, 00:16:24.425 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:24.425 "is_configured": false, 00:16:24.425 "data_offset": 2048, 
00:16:24.425 "data_size": 63488 00:16:24.425 } 00:16:24.425 ] 00:16:24.425 }' 00:16:24.425 08:53:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.425 08:53:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.996 08:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:24.996 08:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:24.996 08:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:16:24.996 08:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:24.996 08:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.996 08:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.996 [2024-10-05 08:53:01.232200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:24.996 [2024-10-05 08:53:01.232247] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.996 [2024-10-05 08:53:01.232263] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:24.996 [2024-10-05 08:53:01.232272] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.996 [2024-10-05 08:53:01.232619] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.996 [2024-10-05 08:53:01.232642] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:24.996 [2024-10-05 08:53:01.232698] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:24.996 [2024-10-05 08:53:01.232714] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:24.996 [2024-10-05 08:53:01.232824] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:24.996 [2024-10-05 08:53:01.232839] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:24.996 [2024-10-05 08:53:01.233076] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:24.996 [2024-10-05 08:53:01.240048] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:24.996 [2024-10-05 08:53:01.240071] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:24.996 [2024-10-05 08:53:01.240316] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:24.996 pt4 00:16:24.996 08:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.996 08:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:24.996 08:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:24.996 08:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:24.996 08:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:24.996 08:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.996 08:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:24.996 08:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.996 08:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.996 08:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.996 08:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.996 
08:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.996 08:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.996 08:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.996 08:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.996 08:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.996 08:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.996 "name": "raid_bdev1", 00:16:24.996 "uuid": "5dc13c70-7597-4883-9ab3-dd1b710beb4f", 00:16:24.996 "strip_size_kb": 64, 00:16:24.996 "state": "online", 00:16:24.996 "raid_level": "raid5f", 00:16:24.996 "superblock": true, 00:16:24.996 "num_base_bdevs": 4, 00:16:24.996 "num_base_bdevs_discovered": 3, 00:16:24.996 "num_base_bdevs_operational": 3, 00:16:24.996 "base_bdevs_list": [ 00:16:24.996 { 00:16:24.996 "name": null, 00:16:24.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.996 "is_configured": false, 00:16:24.996 "data_offset": 2048, 00:16:24.996 "data_size": 63488 00:16:24.996 }, 00:16:24.996 { 00:16:24.996 "name": "pt2", 00:16:24.996 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:24.996 "is_configured": true, 00:16:24.996 "data_offset": 2048, 00:16:24.996 "data_size": 63488 00:16:24.996 }, 00:16:24.996 { 00:16:24.996 "name": "pt3", 00:16:24.996 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:24.996 "is_configured": true, 00:16:24.996 "data_offset": 2048, 00:16:24.996 "data_size": 63488 00:16:24.996 }, 00:16:24.996 { 00:16:24.996 "name": "pt4", 00:16:24.996 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:24.996 "is_configured": true, 00:16:24.996 "data_offset": 2048, 00:16:24.996 "data_size": 63488 00:16:24.996 } 00:16:24.996 ] 00:16:24.996 }' 00:16:24.996 08:53:01 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.996 08:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.257 08:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:25.257 08:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.257 08:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.257 [2024-10-05 08:53:01.640003] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:25.257 [2024-10-05 08:53:01.640089] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:25.257 [2024-10-05 08:53:01.640174] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:25.257 [2024-10-05 08:53:01.640256] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:25.257 [2024-10-05 08:53:01.640333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:25.257 08:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.257 08:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.257 08:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.257 08:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:25.257 08:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.257 08:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.257 08:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:25.257 08:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:16:25.257 08:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:16:25.257 08:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:16:25.257 08:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:16:25.257 08:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.257 08:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.257 08:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.257 08:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:25.257 08:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.257 08:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.257 [2024-10-05 08:53:01.715873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:25.257 [2024-10-05 08:53:01.715995] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.257 [2024-10-05 08:53:01.716031] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:16:25.257 [2024-10-05 08:53:01.716061] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.257 [2024-10-05 08:53:01.718344] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.257 [2024-10-05 08:53:01.718421] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:25.257 [2024-10-05 08:53:01.718506] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:25.257 [2024-10-05 08:53:01.718581] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:25.257 
[2024-10-05 08:53:01.718732] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:25.257 [2024-10-05 08:53:01.718792] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:25.257 [2024-10-05 08:53:01.718826] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:25.257 [2024-10-05 08:53:01.718913] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:25.257 [2024-10-05 08:53:01.719051] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:25.257 pt1 00:16:25.257 08:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.257 08:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:16:25.257 08:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:25.257 08:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.257 08:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:25.257 08:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.257 08:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.257 08:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:25.257 08:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.257 08:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.257 08:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.257 08:53:01 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.516 08:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.516 08:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.516 08:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.516 08:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.516 08:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.516 08:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.516 "name": "raid_bdev1", 00:16:25.516 "uuid": "5dc13c70-7597-4883-9ab3-dd1b710beb4f", 00:16:25.516 "strip_size_kb": 64, 00:16:25.516 "state": "configuring", 00:16:25.516 "raid_level": "raid5f", 00:16:25.516 "superblock": true, 00:16:25.516 "num_base_bdevs": 4, 00:16:25.516 "num_base_bdevs_discovered": 2, 00:16:25.516 "num_base_bdevs_operational": 3, 00:16:25.516 "base_bdevs_list": [ 00:16:25.516 { 00:16:25.516 "name": null, 00:16:25.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.516 "is_configured": false, 00:16:25.516 "data_offset": 2048, 00:16:25.516 "data_size": 63488 00:16:25.516 }, 00:16:25.516 { 00:16:25.516 "name": "pt2", 00:16:25.516 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:25.516 "is_configured": true, 00:16:25.516 "data_offset": 2048, 00:16:25.516 "data_size": 63488 00:16:25.516 }, 00:16:25.516 { 00:16:25.516 "name": "pt3", 00:16:25.516 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:25.516 "is_configured": true, 00:16:25.516 "data_offset": 2048, 00:16:25.516 "data_size": 63488 00:16:25.516 }, 00:16:25.516 { 00:16:25.516 "name": null, 00:16:25.516 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:25.516 "is_configured": false, 00:16:25.516 "data_offset": 2048, 00:16:25.516 "data_size": 63488 00:16:25.516 } 00:16:25.516 ] 
00:16:25.516 }' 00:16:25.516 08:53:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.516 08:53:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.776 08:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:25.776 08:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.776 08:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.776 08:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:25.776 08:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.776 08:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:25.776 08:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:25.776 08:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.776 08:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.776 [2024-10-05 08:53:02.223015] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:25.776 [2024-10-05 08:53:02.223107] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.776 [2024-10-05 08:53:02.223142] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:25.776 [2024-10-05 08:53:02.223170] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.776 [2024-10-05 08:53:02.223533] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.776 [2024-10-05 08:53:02.223589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:16:25.776 [2024-10-05 08:53:02.223674] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:25.776 [2024-10-05 08:53:02.223719] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:25.776 [2024-10-05 08:53:02.223851] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:25.776 [2024-10-05 08:53:02.223888] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:25.776 [2024-10-05 08:53:02.224159] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:25.776 [2024-10-05 08:53:02.231239] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:25.776 [2024-10-05 08:53:02.231296] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:25.776 [2024-10-05 08:53:02.231543] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.776 pt4 00:16:25.776 08:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.776 08:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:25.776 08:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.776 08:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.776 08:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.776 08:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.776 08:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:25.776 08:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.776 08:53:02 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.776 08:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.776 08:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.776 08:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.776 08:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.776 08:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.776 08:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.037 08:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.037 08:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.037 "name": "raid_bdev1", 00:16:26.037 "uuid": "5dc13c70-7597-4883-9ab3-dd1b710beb4f", 00:16:26.037 "strip_size_kb": 64, 00:16:26.037 "state": "online", 00:16:26.037 "raid_level": "raid5f", 00:16:26.037 "superblock": true, 00:16:26.037 "num_base_bdevs": 4, 00:16:26.037 "num_base_bdevs_discovered": 3, 00:16:26.037 "num_base_bdevs_operational": 3, 00:16:26.037 "base_bdevs_list": [ 00:16:26.037 { 00:16:26.037 "name": null, 00:16:26.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.037 "is_configured": false, 00:16:26.037 "data_offset": 2048, 00:16:26.037 "data_size": 63488 00:16:26.037 }, 00:16:26.037 { 00:16:26.037 "name": "pt2", 00:16:26.037 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:26.037 "is_configured": true, 00:16:26.037 "data_offset": 2048, 00:16:26.037 "data_size": 63488 00:16:26.037 }, 00:16:26.037 { 00:16:26.037 "name": "pt3", 00:16:26.037 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:26.037 "is_configured": true, 00:16:26.037 "data_offset": 2048, 00:16:26.037 "data_size": 63488 
00:16:26.037 }, 00:16:26.037 { 00:16:26.037 "name": "pt4", 00:16:26.037 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:26.037 "is_configured": true, 00:16:26.037 "data_offset": 2048, 00:16:26.037 "data_size": 63488 00:16:26.037 } 00:16:26.037 ] 00:16:26.037 }' 00:16:26.037 08:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.037 08:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.297 08:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:26.297 08:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:26.297 08:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.297 08:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.297 08:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.297 08:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:26.297 08:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:26.297 08:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.297 08:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:26.297 08:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.297 [2024-10-05 08:53:02.707069] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:26.297 08:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.297 08:53:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 5dc13c70-7597-4883-9ab3-dd1b710beb4f '!=' 5dc13c70-7597-4883-9ab3-dd1b710beb4f ']' 00:16:26.297 08:53:02 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 80844 00:16:26.297 08:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 80844 ']' 00:16:26.297 08:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 80844 00:16:26.297 08:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:16:26.297 08:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:26.297 08:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80844 00:16:26.557 08:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:26.557 08:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:26.557 08:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80844' 00:16:26.557 killing process with pid 80844 00:16:26.557 08:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 80844 00:16:26.557 [2024-10-05 08:53:02.789413] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:26.557 [2024-10-05 08:53:02.789492] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:26.557 [2024-10-05 08:53:02.789564] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:26.557 [2024-10-05 08:53:02.789576] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:26.557 08:53:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 80844 00:16:26.817 [2024-10-05 08:53:03.160869] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:28.202 08:53:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:28.202 
00:16:28.202 real 0m8.416s 00:16:28.202 user 0m13.046s 00:16:28.202 sys 0m1.666s 00:16:28.202 08:53:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:28.202 ************************************ 00:16:28.202 END TEST raid5f_superblock_test 00:16:28.202 ************************************ 00:16:28.202 08:53:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.202 08:53:04 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:28.202 08:53:04 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:16:28.202 08:53:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:28.202 08:53:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:28.202 08:53:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:28.202 ************************************ 00:16:28.202 START TEST raid5f_rebuild_test 00:16:28.202 ************************************ 00:16:28.202 08:53:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 false false true 00:16:28.202 08:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:28.203 08:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:28.203 08:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:28.203 08:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:28.203 08:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:28.203 08:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:28.203 08:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:28.203 08:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:16:28.203 08:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:28.203 08:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:28.203 08:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:28.203 08:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:28.203 08:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:28.203 08:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:28.203 08:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:28.203 08:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:28.203 08:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:28.203 08:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:28.203 08:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:28.203 08:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:28.203 08:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:28.203 08:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:28.203 08:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:28.203 08:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:28.203 08:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:28.203 08:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:28.203 08:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:28.203 08:53:04 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:28.203 08:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:28.203 08:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:28.203 08:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:28.203 08:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81277 00:16:28.203 08:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:28.203 08:53:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81277 00:16:28.203 08:53:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 81277 ']' 00:16:28.203 08:53:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.203 08:53:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:28.203 08:53:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:28.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:28.203 08:53:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:28.203 08:53:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.203 [2024-10-05 08:53:04.527981] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:16:28.203 [2024-10-05 08:53:04.528138] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:16:28.203 Zero copy mechanism will not be used. 
00:16:28.203 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81277 ] 00:16:28.463 [2024-10-05 08:53:04.689850] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.463 [2024-10-05 08:53:04.888546] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.723 [2024-10-05 08:53:05.081122] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:28.723 [2024-10-05 08:53:05.081255] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:28.983 08:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:28.984 08:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:16:28.984 08:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:28.984 08:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:28.984 08:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.984 08:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.984 BaseBdev1_malloc 00:16:28.984 08:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.984 08:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:28.984 08:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.984 08:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.984 [2024-10-05 08:53:05.389726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:28.984 [2024-10-05 08:53:05.389884] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:16:28.984 [2024-10-05 08:53:05.389924] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:28.984 [2024-10-05 08:53:05.389966] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.984 [2024-10-05 08:53:05.391901] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.984 [2024-10-05 08:53:05.391984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:28.984 BaseBdev1 00:16:28.984 08:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.984 08:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:28.984 08:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:28.984 08:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.984 08:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.244 BaseBdev2_malloc 00:16:29.244 08:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.244 08:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:29.244 08:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.244 08:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.244 [2024-10-05 08:53:05.473113] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:29.244 [2024-10-05 08:53:05.473231] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.244 [2024-10-05 08:53:05.473266] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:29.244 [2024-10-05 08:53:05.473299] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.244 [2024-10-05 08:53:05.475254] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.244 [2024-10-05 08:53:05.475329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:29.244 BaseBdev2 00:16:29.244 08:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.244 08:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:29.244 08:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:29.244 08:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.244 08:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.244 BaseBdev3_malloc 00:16:29.244 08:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.244 08:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:29.244 08:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.244 08:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.244 [2024-10-05 08:53:05.526993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:29.244 [2024-10-05 08:53:05.527096] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.244 [2024-10-05 08:53:05.527130] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:29.244 [2024-10-05 08:53:05.527159] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.244 [2024-10-05 08:53:05.529036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.244 [2024-10-05 
08:53:05.529116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:29.244 BaseBdev3 00:16:29.244 08:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.244 08:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:29.245 08:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:29.245 08:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.245 08:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.245 BaseBdev4_malloc 00:16:29.245 08:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.245 08:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:29.245 08:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.245 08:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.245 [2024-10-05 08:53:05.580883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:29.245 [2024-10-05 08:53:05.580935] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.245 [2024-10-05 08:53:05.580953] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:29.245 [2024-10-05 08:53:05.580971] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.245 [2024-10-05 08:53:05.582828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.245 [2024-10-05 08:53:05.582918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:29.245 BaseBdev4 00:16:29.245 08:53:05 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.245 08:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:29.245 08:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.245 08:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.245 spare_malloc 00:16:29.245 08:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.245 08:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:29.245 08:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.245 08:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.245 spare_delay 00:16:29.245 08:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.245 08:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:29.245 08:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.245 08:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.245 [2024-10-05 08:53:05.645914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:29.245 [2024-10-05 08:53:05.646036] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.245 [2024-10-05 08:53:05.646058] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:29.245 [2024-10-05 08:53:05.646068] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.245 [2024-10-05 08:53:05.647940] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.245 [2024-10-05 08:53:05.647988] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:29.245 spare 00:16:29.245 08:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.245 08:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:29.245 08:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.245 08:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.245 [2024-10-05 08:53:05.657946] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:29.245 [2024-10-05 08:53:05.659628] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:29.245 [2024-10-05 08:53:05.659729] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:29.245 [2024-10-05 08:53:05.659797] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:29.245 [2024-10-05 08:53:05.659905] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:29.245 [2024-10-05 08:53:05.659943] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:29.245 [2024-10-05 08:53:05.660200] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:29.245 [2024-10-05 08:53:05.666351] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:29.245 [2024-10-05 08:53:05.666404] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:29.245 [2024-10-05 08:53:05.666613] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.245 08:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.245 08:53:05 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:29.245 08:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:29.245 08:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:29.245 08:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.245 08:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.245 08:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:29.245 08:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.245 08:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.245 08:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.245 08:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.245 08:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.245 08:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.245 08:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.245 08:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.245 08:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.505 08:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.505 "name": "raid_bdev1", 00:16:29.505 "uuid": "04edbe95-cf76-445d-9871-fcf6d7ab9f00", 00:16:29.505 "strip_size_kb": 64, 00:16:29.505 "state": "online", 00:16:29.505 "raid_level": "raid5f", 00:16:29.505 "superblock": false, 00:16:29.505 "num_base_bdevs": 4, 00:16:29.505 
"num_base_bdevs_discovered": 4, 00:16:29.505 "num_base_bdevs_operational": 4, 00:16:29.505 "base_bdevs_list": [ 00:16:29.505 { 00:16:29.505 "name": "BaseBdev1", 00:16:29.505 "uuid": "30159716-430b-58d0-bf5f-2d441ad31232", 00:16:29.505 "is_configured": true, 00:16:29.505 "data_offset": 0, 00:16:29.505 "data_size": 65536 00:16:29.505 }, 00:16:29.505 { 00:16:29.506 "name": "BaseBdev2", 00:16:29.506 "uuid": "112fd2ce-711d-55f2-9deb-57bab17df7e0", 00:16:29.506 "is_configured": true, 00:16:29.506 "data_offset": 0, 00:16:29.506 "data_size": 65536 00:16:29.506 }, 00:16:29.506 { 00:16:29.506 "name": "BaseBdev3", 00:16:29.506 "uuid": "b0045d43-d42a-5647-9490-edf84cf3ceba", 00:16:29.506 "is_configured": true, 00:16:29.506 "data_offset": 0, 00:16:29.506 "data_size": 65536 00:16:29.506 }, 00:16:29.506 { 00:16:29.506 "name": "BaseBdev4", 00:16:29.506 "uuid": "9b9aedaf-a353-5df8-9a89-9a3cf93f0ad5", 00:16:29.506 "is_configured": true, 00:16:29.506 "data_offset": 0, 00:16:29.506 "data_size": 65536 00:16:29.506 } 00:16:29.506 ] 00:16:29.506 }' 00:16:29.506 08:53:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.506 08:53:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.765 08:53:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:29.765 08:53:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.765 08:53:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.765 08:53:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:29.765 [2024-10-05 08:53:06.125511] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:29.765 08:53:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.765 08:53:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 
00:16:29.765 08:53:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:29.765 08:53:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.765 08:53:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.765 08:53:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.765 08:53:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.765 08:53:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:29.765 08:53:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:29.765 08:53:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:29.765 08:53:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:29.765 08:53:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:29.765 08:53:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:29.765 08:53:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:29.765 08:53:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:29.765 08:53:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:29.765 08:53:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:29.765 08:53:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:29.766 08:53:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:29.766 08:53:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:29.766 08:53:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:30.026 [2024-10-05 08:53:06.385104] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:30.026 /dev/nbd0 00:16:30.026 08:53:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:30.026 08:53:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:30.026 08:53:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:30.026 08:53:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:16:30.026 08:53:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:30.026 08:53:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:30.026 08:53:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:30.026 08:53:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:16:30.026 08:53:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:30.026 08:53:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:30.026 08:53:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:30.026 1+0 records in 00:16:30.026 1+0 records out 00:16:30.026 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292343 s, 14.0 MB/s 00:16:30.026 08:53:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:30.026 08:53:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:16:30.026 08:53:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:16:30.026 08:53:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:30.026 08:53:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:16:30.026 08:53:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:30.026 08:53:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:30.026 08:53:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:30.026 08:53:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:30.026 08:53:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:30.026 08:53:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:16:30.596 512+0 records in 00:16:30.597 512+0 records out 00:16:30.597 100663296 bytes (101 MB, 96 MiB) copied, 0.5684 s, 177 MB/s 00:16:30.597 08:53:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:30.597 08:53:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:30.597 08:53:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:30.597 08:53:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:30.597 08:53:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:30.597 08:53:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:30.597 08:53:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:30.857 [2024-10-05 08:53:07.257069] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:30.857 08:53:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:16:30.857 08:53:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:30.857 08:53:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:30.857 08:53:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:30.857 08:53:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:30.857 08:53:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:30.857 08:53:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:30.857 08:53:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:30.857 08:53:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:30.857 08:53:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.857 08:53:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.857 [2024-10-05 08:53:07.285704] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:30.857 08:53:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.857 08:53:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:30.857 08:53:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:30.857 08:53:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:30.857 08:53:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:30.857 08:53:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.857 08:53:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:30.857 08:53:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 
-- # local raid_bdev_info 00:16:30.857 08:53:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.857 08:53:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.857 08:53:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.857 08:53:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.857 08:53:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.857 08:53:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.857 08:53:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.857 08:53:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.117 08:53:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.117 "name": "raid_bdev1", 00:16:31.117 "uuid": "04edbe95-cf76-445d-9871-fcf6d7ab9f00", 00:16:31.117 "strip_size_kb": 64, 00:16:31.117 "state": "online", 00:16:31.117 "raid_level": "raid5f", 00:16:31.117 "superblock": false, 00:16:31.117 "num_base_bdevs": 4, 00:16:31.117 "num_base_bdevs_discovered": 3, 00:16:31.117 "num_base_bdevs_operational": 3, 00:16:31.117 "base_bdevs_list": [ 00:16:31.117 { 00:16:31.117 "name": null, 00:16:31.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.117 "is_configured": false, 00:16:31.117 "data_offset": 0, 00:16:31.117 "data_size": 65536 00:16:31.117 }, 00:16:31.117 { 00:16:31.117 "name": "BaseBdev2", 00:16:31.117 "uuid": "112fd2ce-711d-55f2-9deb-57bab17df7e0", 00:16:31.117 "is_configured": true, 00:16:31.117 "data_offset": 0, 00:16:31.117 "data_size": 65536 00:16:31.117 }, 00:16:31.117 { 00:16:31.117 "name": "BaseBdev3", 00:16:31.117 "uuid": "b0045d43-d42a-5647-9490-edf84cf3ceba", 00:16:31.117 "is_configured": true, 00:16:31.117 "data_offset": 0, 
00:16:31.117 "data_size": 65536 00:16:31.117 }, 00:16:31.117 { 00:16:31.117 "name": "BaseBdev4", 00:16:31.117 "uuid": "9b9aedaf-a353-5df8-9a89-9a3cf93f0ad5", 00:16:31.117 "is_configured": true, 00:16:31.117 "data_offset": 0, 00:16:31.117 "data_size": 65536 00:16:31.117 } 00:16:31.117 ] 00:16:31.117 }' 00:16:31.117 08:53:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.117 08:53:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.376 08:53:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:31.376 08:53:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.376 08:53:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.376 [2024-10-05 08:53:07.741027] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:31.376 [2024-10-05 08:53:07.754686] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:31.376 08:53:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.376 08:53:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:31.376 [2024-10-05 08:53:07.763486] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:32.319 08:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:32.319 08:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:32.319 08:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:32.319 08:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:32.319 08:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.319 08:53:08 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.319 08:53:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.319 08:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.319 08:53:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.319 08:53:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.578 08:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.578 "name": "raid_bdev1", 00:16:32.578 "uuid": "04edbe95-cf76-445d-9871-fcf6d7ab9f00", 00:16:32.578 "strip_size_kb": 64, 00:16:32.578 "state": "online", 00:16:32.578 "raid_level": "raid5f", 00:16:32.578 "superblock": false, 00:16:32.578 "num_base_bdevs": 4, 00:16:32.578 "num_base_bdevs_discovered": 4, 00:16:32.578 "num_base_bdevs_operational": 4, 00:16:32.578 "process": { 00:16:32.578 "type": "rebuild", 00:16:32.578 "target": "spare", 00:16:32.578 "progress": { 00:16:32.578 "blocks": 19200, 00:16:32.578 "percent": 9 00:16:32.578 } 00:16:32.578 }, 00:16:32.578 "base_bdevs_list": [ 00:16:32.578 { 00:16:32.578 "name": "spare", 00:16:32.578 "uuid": "b4744bc6-32fc-50c7-aea1-2c2538070a96", 00:16:32.578 "is_configured": true, 00:16:32.578 "data_offset": 0, 00:16:32.578 "data_size": 65536 00:16:32.578 }, 00:16:32.578 { 00:16:32.578 "name": "BaseBdev2", 00:16:32.578 "uuid": "112fd2ce-711d-55f2-9deb-57bab17df7e0", 00:16:32.578 "is_configured": true, 00:16:32.578 "data_offset": 0, 00:16:32.578 "data_size": 65536 00:16:32.578 }, 00:16:32.578 { 00:16:32.578 "name": "BaseBdev3", 00:16:32.578 "uuid": "b0045d43-d42a-5647-9490-edf84cf3ceba", 00:16:32.578 "is_configured": true, 00:16:32.578 "data_offset": 0, 00:16:32.578 "data_size": 65536 00:16:32.579 }, 00:16:32.579 { 00:16:32.579 "name": "BaseBdev4", 00:16:32.579 "uuid": "9b9aedaf-a353-5df8-9a89-9a3cf93f0ad5", 
00:16:32.579 "is_configured": true, 00:16:32.579 "data_offset": 0, 00:16:32.579 "data_size": 65536 00:16:32.579 } 00:16:32.579 ] 00:16:32.579 }' 00:16:32.579 08:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.579 08:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:32.579 08:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.579 08:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:32.579 08:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:32.579 08:53:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.579 08:53:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.579 [2024-10-05 08:53:08.906212] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:32.579 [2024-10-05 08:53:08.969086] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:32.579 [2024-10-05 08:53:08.969214] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:32.579 [2024-10-05 08:53:08.969254] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:32.579 [2024-10-05 08:53:08.969278] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:32.579 08:53:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.579 08:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:32.579 08:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.579 08:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:32.579 08:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:32.579 08:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.579 08:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:32.579 08:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.579 08:53:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.579 08:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.579 08:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.579 08:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.579 08:53:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.579 08:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.579 08:53:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.579 08:53:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.838 08:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.838 "name": "raid_bdev1", 00:16:32.838 "uuid": "04edbe95-cf76-445d-9871-fcf6d7ab9f00", 00:16:32.838 "strip_size_kb": 64, 00:16:32.838 "state": "online", 00:16:32.838 "raid_level": "raid5f", 00:16:32.838 "superblock": false, 00:16:32.838 "num_base_bdevs": 4, 00:16:32.838 "num_base_bdevs_discovered": 3, 00:16:32.838 "num_base_bdevs_operational": 3, 00:16:32.838 "base_bdevs_list": [ 00:16:32.838 { 00:16:32.838 "name": null, 00:16:32.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.838 "is_configured": false, 00:16:32.838 "data_offset": 0, 00:16:32.838 "data_size": 65536 
00:16:32.838 }, 00:16:32.838 { 00:16:32.838 "name": "BaseBdev2", 00:16:32.838 "uuid": "112fd2ce-711d-55f2-9deb-57bab17df7e0", 00:16:32.838 "is_configured": true, 00:16:32.838 "data_offset": 0, 00:16:32.838 "data_size": 65536 00:16:32.838 }, 00:16:32.838 { 00:16:32.838 "name": "BaseBdev3", 00:16:32.838 "uuid": "b0045d43-d42a-5647-9490-edf84cf3ceba", 00:16:32.838 "is_configured": true, 00:16:32.838 "data_offset": 0, 00:16:32.838 "data_size": 65536 00:16:32.838 }, 00:16:32.838 { 00:16:32.838 "name": "BaseBdev4", 00:16:32.838 "uuid": "9b9aedaf-a353-5df8-9a89-9a3cf93f0ad5", 00:16:32.838 "is_configured": true, 00:16:32.838 "data_offset": 0, 00:16:32.838 "data_size": 65536 00:16:32.838 } 00:16:32.838 ] 00:16:32.838 }' 00:16:32.838 08:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.838 08:53:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.097 08:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:33.097 08:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.097 08:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:33.097 08:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:33.097 08:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.097 08:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.097 08:53:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.097 08:53:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.097 08:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.097 08:53:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:16:33.097 08:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.097 "name": "raid_bdev1", 00:16:33.097 "uuid": "04edbe95-cf76-445d-9871-fcf6d7ab9f00", 00:16:33.097 "strip_size_kb": 64, 00:16:33.097 "state": "online", 00:16:33.097 "raid_level": "raid5f", 00:16:33.097 "superblock": false, 00:16:33.097 "num_base_bdevs": 4, 00:16:33.097 "num_base_bdevs_discovered": 3, 00:16:33.097 "num_base_bdevs_operational": 3, 00:16:33.097 "base_bdevs_list": [ 00:16:33.097 { 00:16:33.097 "name": null, 00:16:33.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.097 "is_configured": false, 00:16:33.097 "data_offset": 0, 00:16:33.097 "data_size": 65536 00:16:33.097 }, 00:16:33.097 { 00:16:33.097 "name": "BaseBdev2", 00:16:33.097 "uuid": "112fd2ce-711d-55f2-9deb-57bab17df7e0", 00:16:33.097 "is_configured": true, 00:16:33.097 "data_offset": 0, 00:16:33.097 "data_size": 65536 00:16:33.097 }, 00:16:33.097 { 00:16:33.097 "name": "BaseBdev3", 00:16:33.097 "uuid": "b0045d43-d42a-5647-9490-edf84cf3ceba", 00:16:33.097 "is_configured": true, 00:16:33.097 "data_offset": 0, 00:16:33.097 "data_size": 65536 00:16:33.097 }, 00:16:33.097 { 00:16:33.097 "name": "BaseBdev4", 00:16:33.097 "uuid": "9b9aedaf-a353-5df8-9a89-9a3cf93f0ad5", 00:16:33.097 "is_configured": true, 00:16:33.097 "data_offset": 0, 00:16:33.097 "data_size": 65536 00:16:33.097 } 00:16:33.097 ] 00:16:33.097 }' 00:16:33.097 08:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.097 08:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:33.097 08:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.097 08:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:33.097 08:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 
00:16:33.097 08:53:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.097 08:53:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.097 [2024-10-05 08:53:09.539806] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:33.097 [2024-10-05 08:53:09.553185] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:16:33.097 08:53:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.097 08:53:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:33.097 [2024-10-05 08:53:09.561992] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:34.474 08:53:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:34.474 08:53:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:34.474 08:53:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:34.474 08:53:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:34.474 08:53:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:34.474 08:53:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.474 08:53:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.474 08:53:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.474 08:53:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.474 08:53:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.474 08:53:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:34.474 
"name": "raid_bdev1", 00:16:34.474 "uuid": "04edbe95-cf76-445d-9871-fcf6d7ab9f00", 00:16:34.474 "strip_size_kb": 64, 00:16:34.474 "state": "online", 00:16:34.474 "raid_level": "raid5f", 00:16:34.474 "superblock": false, 00:16:34.474 "num_base_bdevs": 4, 00:16:34.474 "num_base_bdevs_discovered": 4, 00:16:34.474 "num_base_bdevs_operational": 4, 00:16:34.474 "process": { 00:16:34.474 "type": "rebuild", 00:16:34.474 "target": "spare", 00:16:34.474 "progress": { 00:16:34.474 "blocks": 19200, 00:16:34.474 "percent": 9 00:16:34.474 } 00:16:34.474 }, 00:16:34.474 "base_bdevs_list": [ 00:16:34.474 { 00:16:34.474 "name": "spare", 00:16:34.474 "uuid": "b4744bc6-32fc-50c7-aea1-2c2538070a96", 00:16:34.474 "is_configured": true, 00:16:34.474 "data_offset": 0, 00:16:34.474 "data_size": 65536 00:16:34.474 }, 00:16:34.474 { 00:16:34.474 "name": "BaseBdev2", 00:16:34.474 "uuid": "112fd2ce-711d-55f2-9deb-57bab17df7e0", 00:16:34.474 "is_configured": true, 00:16:34.474 "data_offset": 0, 00:16:34.474 "data_size": 65536 00:16:34.474 }, 00:16:34.474 { 00:16:34.474 "name": "BaseBdev3", 00:16:34.474 "uuid": "b0045d43-d42a-5647-9490-edf84cf3ceba", 00:16:34.474 "is_configured": true, 00:16:34.474 "data_offset": 0, 00:16:34.474 "data_size": 65536 00:16:34.474 }, 00:16:34.474 { 00:16:34.474 "name": "BaseBdev4", 00:16:34.474 "uuid": "9b9aedaf-a353-5df8-9a89-9a3cf93f0ad5", 00:16:34.474 "is_configured": true, 00:16:34.474 "data_offset": 0, 00:16:34.474 "data_size": 65536 00:16:34.474 } 00:16:34.474 ] 00:16:34.474 }' 00:16:34.474 08:53:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:34.474 08:53:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:34.474 08:53:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:34.474 08:53:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:34.474 08:53:10 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:34.474 08:53:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:34.474 08:53:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:34.474 08:53:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=622 00:16:34.474 08:53:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:34.474 08:53:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:34.474 08:53:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:34.474 08:53:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:34.474 08:53:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:34.474 08:53:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:34.474 08:53:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.474 08:53:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.475 08:53:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.475 08:53:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.475 08:53:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.475 08:53:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:34.475 "name": "raid_bdev1", 00:16:34.475 "uuid": "04edbe95-cf76-445d-9871-fcf6d7ab9f00", 00:16:34.475 "strip_size_kb": 64, 00:16:34.475 "state": "online", 00:16:34.475 "raid_level": "raid5f", 00:16:34.475 "superblock": false, 00:16:34.475 "num_base_bdevs": 4, 00:16:34.475 
"num_base_bdevs_discovered": 4, 00:16:34.475 "num_base_bdevs_operational": 4, 00:16:34.475 "process": { 00:16:34.475 "type": "rebuild", 00:16:34.475 "target": "spare", 00:16:34.475 "progress": { 00:16:34.475 "blocks": 21120, 00:16:34.475 "percent": 10 00:16:34.475 } 00:16:34.475 }, 00:16:34.475 "base_bdevs_list": [ 00:16:34.475 { 00:16:34.475 "name": "spare", 00:16:34.475 "uuid": "b4744bc6-32fc-50c7-aea1-2c2538070a96", 00:16:34.475 "is_configured": true, 00:16:34.475 "data_offset": 0, 00:16:34.475 "data_size": 65536 00:16:34.475 }, 00:16:34.475 { 00:16:34.475 "name": "BaseBdev2", 00:16:34.475 "uuid": "112fd2ce-711d-55f2-9deb-57bab17df7e0", 00:16:34.475 "is_configured": true, 00:16:34.475 "data_offset": 0, 00:16:34.475 "data_size": 65536 00:16:34.475 }, 00:16:34.475 { 00:16:34.475 "name": "BaseBdev3", 00:16:34.475 "uuid": "b0045d43-d42a-5647-9490-edf84cf3ceba", 00:16:34.475 "is_configured": true, 00:16:34.475 "data_offset": 0, 00:16:34.475 "data_size": 65536 00:16:34.475 }, 00:16:34.475 { 00:16:34.475 "name": "BaseBdev4", 00:16:34.475 "uuid": "9b9aedaf-a353-5df8-9a89-9a3cf93f0ad5", 00:16:34.475 "is_configured": true, 00:16:34.475 "data_offset": 0, 00:16:34.475 "data_size": 65536 00:16:34.475 } 00:16:34.475 ] 00:16:34.475 }' 00:16:34.475 08:53:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:34.475 08:53:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:34.475 08:53:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:34.475 08:53:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:34.475 08:53:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:35.414 08:53:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:35.414 08:53:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:16:35.414 08:53:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.414 08:53:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:35.414 08:53:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:35.414 08:53:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.414 08:53:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.414 08:53:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.414 08:53:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.414 08:53:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.414 08:53:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.673 08:53:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.673 "name": "raid_bdev1", 00:16:35.673 "uuid": "04edbe95-cf76-445d-9871-fcf6d7ab9f00", 00:16:35.673 "strip_size_kb": 64, 00:16:35.673 "state": "online", 00:16:35.673 "raid_level": "raid5f", 00:16:35.673 "superblock": false, 00:16:35.673 "num_base_bdevs": 4, 00:16:35.673 "num_base_bdevs_discovered": 4, 00:16:35.673 "num_base_bdevs_operational": 4, 00:16:35.673 "process": { 00:16:35.673 "type": "rebuild", 00:16:35.673 "target": "spare", 00:16:35.673 "progress": { 00:16:35.673 "blocks": 42240, 00:16:35.673 "percent": 21 00:16:35.673 } 00:16:35.673 }, 00:16:35.673 "base_bdevs_list": [ 00:16:35.673 { 00:16:35.673 "name": "spare", 00:16:35.673 "uuid": "b4744bc6-32fc-50c7-aea1-2c2538070a96", 00:16:35.673 "is_configured": true, 00:16:35.673 "data_offset": 0, 00:16:35.673 "data_size": 65536 00:16:35.673 }, 00:16:35.673 { 00:16:35.673 "name": "BaseBdev2", 00:16:35.673 "uuid": 
"112fd2ce-711d-55f2-9deb-57bab17df7e0", 00:16:35.673 "is_configured": true, 00:16:35.673 "data_offset": 0, 00:16:35.673 "data_size": 65536 00:16:35.673 }, 00:16:35.673 { 00:16:35.673 "name": "BaseBdev3", 00:16:35.673 "uuid": "b0045d43-d42a-5647-9490-edf84cf3ceba", 00:16:35.673 "is_configured": true, 00:16:35.673 "data_offset": 0, 00:16:35.673 "data_size": 65536 00:16:35.673 }, 00:16:35.673 { 00:16:35.673 "name": "BaseBdev4", 00:16:35.673 "uuid": "9b9aedaf-a353-5df8-9a89-9a3cf93f0ad5", 00:16:35.673 "is_configured": true, 00:16:35.673 "data_offset": 0, 00:16:35.673 "data_size": 65536 00:16:35.673 } 00:16:35.674 ] 00:16:35.674 }' 00:16:35.674 08:53:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.674 08:53:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:35.674 08:53:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.674 08:53:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:35.674 08:53:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:36.615 08:53:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:36.615 08:53:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:36.615 08:53:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:36.615 08:53:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:36.615 08:53:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:36.615 08:53:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:36.615 08:53:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.615 08:53:13 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.615 08:53:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.615 08:53:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.615 08:53:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.615 08:53:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:36.615 "name": "raid_bdev1", 00:16:36.615 "uuid": "04edbe95-cf76-445d-9871-fcf6d7ab9f00", 00:16:36.615 "strip_size_kb": 64, 00:16:36.615 "state": "online", 00:16:36.615 "raid_level": "raid5f", 00:16:36.615 "superblock": false, 00:16:36.615 "num_base_bdevs": 4, 00:16:36.615 "num_base_bdevs_discovered": 4, 00:16:36.615 "num_base_bdevs_operational": 4, 00:16:36.615 "process": { 00:16:36.615 "type": "rebuild", 00:16:36.615 "target": "spare", 00:16:36.615 "progress": { 00:16:36.615 "blocks": 65280, 00:16:36.615 "percent": 33 00:16:36.615 } 00:16:36.615 }, 00:16:36.615 "base_bdevs_list": [ 00:16:36.615 { 00:16:36.615 "name": "spare", 00:16:36.615 "uuid": "b4744bc6-32fc-50c7-aea1-2c2538070a96", 00:16:36.615 "is_configured": true, 00:16:36.615 "data_offset": 0, 00:16:36.615 "data_size": 65536 00:16:36.615 }, 00:16:36.615 { 00:16:36.615 "name": "BaseBdev2", 00:16:36.615 "uuid": "112fd2ce-711d-55f2-9deb-57bab17df7e0", 00:16:36.615 "is_configured": true, 00:16:36.615 "data_offset": 0, 00:16:36.615 "data_size": 65536 00:16:36.615 }, 00:16:36.615 { 00:16:36.615 "name": "BaseBdev3", 00:16:36.615 "uuid": "b0045d43-d42a-5647-9490-edf84cf3ceba", 00:16:36.615 "is_configured": true, 00:16:36.615 "data_offset": 0, 00:16:36.615 "data_size": 65536 00:16:36.615 }, 00:16:36.615 { 00:16:36.615 "name": "BaseBdev4", 00:16:36.615 "uuid": "9b9aedaf-a353-5df8-9a89-9a3cf93f0ad5", 00:16:36.615 "is_configured": true, 00:16:36.615 "data_offset": 0, 00:16:36.615 "data_size": 65536 00:16:36.615 } 
00:16:36.615 ] 00:16:36.615 }' 00:16:36.615 08:53:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:36.877 08:53:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:36.877 08:53:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:36.877 08:53:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:36.877 08:53:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:37.845 08:53:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:37.845 08:53:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:37.845 08:53:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:37.845 08:53:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:37.845 08:53:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:37.845 08:53:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.845 08:53:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.845 08:53:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.845 08:53:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.845 08:53:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.845 08:53:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.845 08:53:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.845 "name": "raid_bdev1", 00:16:37.845 "uuid": "04edbe95-cf76-445d-9871-fcf6d7ab9f00", 00:16:37.845 
"strip_size_kb": 64, 00:16:37.845 "state": "online", 00:16:37.845 "raid_level": "raid5f", 00:16:37.845 "superblock": false, 00:16:37.845 "num_base_bdevs": 4, 00:16:37.845 "num_base_bdevs_discovered": 4, 00:16:37.845 "num_base_bdevs_operational": 4, 00:16:37.845 "process": { 00:16:37.845 "type": "rebuild", 00:16:37.845 "target": "spare", 00:16:37.845 "progress": { 00:16:37.845 "blocks": 86400, 00:16:37.845 "percent": 43 00:16:37.845 } 00:16:37.845 }, 00:16:37.845 "base_bdevs_list": [ 00:16:37.845 { 00:16:37.845 "name": "spare", 00:16:37.845 "uuid": "b4744bc6-32fc-50c7-aea1-2c2538070a96", 00:16:37.845 "is_configured": true, 00:16:37.845 "data_offset": 0, 00:16:37.845 "data_size": 65536 00:16:37.845 }, 00:16:37.845 { 00:16:37.845 "name": "BaseBdev2", 00:16:37.845 "uuid": "112fd2ce-711d-55f2-9deb-57bab17df7e0", 00:16:37.845 "is_configured": true, 00:16:37.845 "data_offset": 0, 00:16:37.845 "data_size": 65536 00:16:37.845 }, 00:16:37.845 { 00:16:37.845 "name": "BaseBdev3", 00:16:37.845 "uuid": "b0045d43-d42a-5647-9490-edf84cf3ceba", 00:16:37.845 "is_configured": true, 00:16:37.845 "data_offset": 0, 00:16:37.845 "data_size": 65536 00:16:37.845 }, 00:16:37.845 { 00:16:37.845 "name": "BaseBdev4", 00:16:37.845 "uuid": "9b9aedaf-a353-5df8-9a89-9a3cf93f0ad5", 00:16:37.845 "is_configured": true, 00:16:37.845 "data_offset": 0, 00:16:37.845 "data_size": 65536 00:16:37.845 } 00:16:37.845 ] 00:16:37.845 }' 00:16:37.845 08:53:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.845 08:53:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:37.845 08:53:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.845 08:53:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:37.845 08:53:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:39.227 08:53:15 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:39.227 08:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:39.227 08:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.227 08:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:39.227 08:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:39.227 08:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.227 08:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.227 08:53:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.227 08:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.227 08:53:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.227 08:53:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.227 08:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.227 "name": "raid_bdev1", 00:16:39.227 "uuid": "04edbe95-cf76-445d-9871-fcf6d7ab9f00", 00:16:39.227 "strip_size_kb": 64, 00:16:39.227 "state": "online", 00:16:39.227 "raid_level": "raid5f", 00:16:39.227 "superblock": false, 00:16:39.227 "num_base_bdevs": 4, 00:16:39.227 "num_base_bdevs_discovered": 4, 00:16:39.227 "num_base_bdevs_operational": 4, 00:16:39.227 "process": { 00:16:39.227 "type": "rebuild", 00:16:39.227 "target": "spare", 00:16:39.227 "progress": { 00:16:39.227 "blocks": 109440, 00:16:39.227 "percent": 55 00:16:39.227 } 00:16:39.227 }, 00:16:39.227 "base_bdevs_list": [ 00:16:39.227 { 00:16:39.227 "name": "spare", 00:16:39.227 "uuid": "b4744bc6-32fc-50c7-aea1-2c2538070a96", 
00:16:39.227 "is_configured": true, 00:16:39.227 "data_offset": 0, 00:16:39.227 "data_size": 65536 00:16:39.227 }, 00:16:39.227 { 00:16:39.227 "name": "BaseBdev2", 00:16:39.227 "uuid": "112fd2ce-711d-55f2-9deb-57bab17df7e0", 00:16:39.227 "is_configured": true, 00:16:39.227 "data_offset": 0, 00:16:39.227 "data_size": 65536 00:16:39.227 }, 00:16:39.227 { 00:16:39.227 "name": "BaseBdev3", 00:16:39.227 "uuid": "b0045d43-d42a-5647-9490-edf84cf3ceba", 00:16:39.227 "is_configured": true, 00:16:39.227 "data_offset": 0, 00:16:39.227 "data_size": 65536 00:16:39.227 }, 00:16:39.227 { 00:16:39.227 "name": "BaseBdev4", 00:16:39.227 "uuid": "9b9aedaf-a353-5df8-9a89-9a3cf93f0ad5", 00:16:39.227 "is_configured": true, 00:16:39.227 "data_offset": 0, 00:16:39.227 "data_size": 65536 00:16:39.227 } 00:16:39.227 ] 00:16:39.227 }' 00:16:39.227 08:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.227 08:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:39.227 08:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.227 08:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:39.227 08:53:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:40.166 08:53:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:40.166 08:53:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:40.166 08:53:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.166 08:53:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:40.166 08:53:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:40.167 08:53:16 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.167 08:53:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.167 08:53:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.167 08:53:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.167 08:53:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.167 08:53:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.167 08:53:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.167 "name": "raid_bdev1", 00:16:40.167 "uuid": "04edbe95-cf76-445d-9871-fcf6d7ab9f00", 00:16:40.167 "strip_size_kb": 64, 00:16:40.167 "state": "online", 00:16:40.167 "raid_level": "raid5f", 00:16:40.167 "superblock": false, 00:16:40.167 "num_base_bdevs": 4, 00:16:40.167 "num_base_bdevs_discovered": 4, 00:16:40.167 "num_base_bdevs_operational": 4, 00:16:40.167 "process": { 00:16:40.167 "type": "rebuild", 00:16:40.167 "target": "spare", 00:16:40.167 "progress": { 00:16:40.167 "blocks": 130560, 00:16:40.167 "percent": 66 00:16:40.167 } 00:16:40.167 }, 00:16:40.167 "base_bdevs_list": [ 00:16:40.167 { 00:16:40.167 "name": "spare", 00:16:40.167 "uuid": "b4744bc6-32fc-50c7-aea1-2c2538070a96", 00:16:40.167 "is_configured": true, 00:16:40.167 "data_offset": 0, 00:16:40.167 "data_size": 65536 00:16:40.167 }, 00:16:40.167 { 00:16:40.167 "name": "BaseBdev2", 00:16:40.167 "uuid": "112fd2ce-711d-55f2-9deb-57bab17df7e0", 00:16:40.167 "is_configured": true, 00:16:40.167 "data_offset": 0, 00:16:40.167 "data_size": 65536 00:16:40.167 }, 00:16:40.167 { 00:16:40.167 "name": "BaseBdev3", 00:16:40.167 "uuid": "b0045d43-d42a-5647-9490-edf84cf3ceba", 00:16:40.167 "is_configured": true, 00:16:40.167 "data_offset": 0, 00:16:40.167 "data_size": 65536 00:16:40.167 }, 00:16:40.167 { 00:16:40.167 "name": 
"BaseBdev4", 00:16:40.167 "uuid": "9b9aedaf-a353-5df8-9a89-9a3cf93f0ad5", 00:16:40.167 "is_configured": true, 00:16:40.167 "data_offset": 0, 00:16:40.167 "data_size": 65536 00:16:40.167 } 00:16:40.167 ] 00:16:40.167 }' 00:16:40.167 08:53:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.167 08:53:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:40.167 08:53:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.167 08:53:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:40.167 08:53:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:41.549 08:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:41.549 08:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:41.549 08:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:41.549 08:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:41.549 08:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:41.549 08:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:41.549 08:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.549 08:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.549 08:53:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.549 08:53:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.549 08:53:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.549 08:53:17 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:41.549 "name": "raid_bdev1", 00:16:41.549 "uuid": "04edbe95-cf76-445d-9871-fcf6d7ab9f00", 00:16:41.549 "strip_size_kb": 64, 00:16:41.549 "state": "online", 00:16:41.549 "raid_level": "raid5f", 00:16:41.549 "superblock": false, 00:16:41.549 "num_base_bdevs": 4, 00:16:41.549 "num_base_bdevs_discovered": 4, 00:16:41.549 "num_base_bdevs_operational": 4, 00:16:41.549 "process": { 00:16:41.549 "type": "rebuild", 00:16:41.549 "target": "spare", 00:16:41.549 "progress": { 00:16:41.549 "blocks": 151680, 00:16:41.549 "percent": 77 00:16:41.549 } 00:16:41.549 }, 00:16:41.549 "base_bdevs_list": [ 00:16:41.549 { 00:16:41.549 "name": "spare", 00:16:41.549 "uuid": "b4744bc6-32fc-50c7-aea1-2c2538070a96", 00:16:41.549 "is_configured": true, 00:16:41.549 "data_offset": 0, 00:16:41.549 "data_size": 65536 00:16:41.549 }, 00:16:41.549 { 00:16:41.549 "name": "BaseBdev2", 00:16:41.549 "uuid": "112fd2ce-711d-55f2-9deb-57bab17df7e0", 00:16:41.549 "is_configured": true, 00:16:41.549 "data_offset": 0, 00:16:41.549 "data_size": 65536 00:16:41.549 }, 00:16:41.549 { 00:16:41.549 "name": "BaseBdev3", 00:16:41.549 "uuid": "b0045d43-d42a-5647-9490-edf84cf3ceba", 00:16:41.549 "is_configured": true, 00:16:41.549 "data_offset": 0, 00:16:41.549 "data_size": 65536 00:16:41.549 }, 00:16:41.549 { 00:16:41.549 "name": "BaseBdev4", 00:16:41.549 "uuid": "9b9aedaf-a353-5df8-9a89-9a3cf93f0ad5", 00:16:41.549 "is_configured": true, 00:16:41.549 "data_offset": 0, 00:16:41.549 "data_size": 65536 00:16:41.549 } 00:16:41.549 ] 00:16:41.549 }' 00:16:41.549 08:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:41.549 08:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:41.549 08:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:41.549 08:53:17 bdev_raid.raid5f_rebuild_test 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:41.549 08:53:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:42.486 08:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:42.486 08:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:42.486 08:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.486 08:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:42.486 08:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:42.486 08:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.486 08:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.486 08:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.486 08:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.486 08:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.486 08:53:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.486 08:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.486 "name": "raid_bdev1", 00:16:42.486 "uuid": "04edbe95-cf76-445d-9871-fcf6d7ab9f00", 00:16:42.486 "strip_size_kb": 64, 00:16:42.486 "state": "online", 00:16:42.486 "raid_level": "raid5f", 00:16:42.486 "superblock": false, 00:16:42.486 "num_base_bdevs": 4, 00:16:42.486 "num_base_bdevs_discovered": 4, 00:16:42.486 "num_base_bdevs_operational": 4, 00:16:42.486 "process": { 00:16:42.486 "type": "rebuild", 00:16:42.486 "target": "spare", 00:16:42.486 "progress": { 00:16:42.486 "blocks": 174720, 00:16:42.486 "percent": 88 
00:16:42.486 } 00:16:42.486 }, 00:16:42.486 "base_bdevs_list": [ 00:16:42.486 { 00:16:42.486 "name": "spare", 00:16:42.486 "uuid": "b4744bc6-32fc-50c7-aea1-2c2538070a96", 00:16:42.486 "is_configured": true, 00:16:42.486 "data_offset": 0, 00:16:42.486 "data_size": 65536 00:16:42.486 }, 00:16:42.486 { 00:16:42.486 "name": "BaseBdev2", 00:16:42.486 "uuid": "112fd2ce-711d-55f2-9deb-57bab17df7e0", 00:16:42.486 "is_configured": true, 00:16:42.486 "data_offset": 0, 00:16:42.486 "data_size": 65536 00:16:42.486 }, 00:16:42.486 { 00:16:42.486 "name": "BaseBdev3", 00:16:42.486 "uuid": "b0045d43-d42a-5647-9490-edf84cf3ceba", 00:16:42.486 "is_configured": true, 00:16:42.486 "data_offset": 0, 00:16:42.486 "data_size": 65536 00:16:42.487 }, 00:16:42.487 { 00:16:42.487 "name": "BaseBdev4", 00:16:42.487 "uuid": "9b9aedaf-a353-5df8-9a89-9a3cf93f0ad5", 00:16:42.487 "is_configured": true, 00:16:42.487 "data_offset": 0, 00:16:42.487 "data_size": 65536 00:16:42.487 } 00:16:42.487 ] 00:16:42.487 }' 00:16:42.487 08:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.487 08:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:42.487 08:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.487 08:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:42.487 08:53:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:43.870 08:53:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:43.870 08:53:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:43.870 08:53:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:43.870 08:53:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:16:43.870 08:53:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:43.870 08:53:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:43.870 [2024-10-05 08:53:19.906326] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:43.870 [2024-10-05 08:53:19.906436] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:43.870 08:53:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.870 [2024-10-05 08:53:19.906516] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:43.870 08:53:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.870 08:53:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.870 08:53:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.870 08:53:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.870 08:53:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:43.870 "name": "raid_bdev1", 00:16:43.870 "uuid": "04edbe95-cf76-445d-9871-fcf6d7ab9f00", 00:16:43.870 "strip_size_kb": 64, 00:16:43.870 "state": "online", 00:16:43.870 "raid_level": "raid5f", 00:16:43.870 "superblock": false, 00:16:43.870 "num_base_bdevs": 4, 00:16:43.870 "num_base_bdevs_discovered": 4, 00:16:43.870 "num_base_bdevs_operational": 4, 00:16:43.870 "base_bdevs_list": [ 00:16:43.870 { 00:16:43.870 "name": "spare", 00:16:43.870 "uuid": "b4744bc6-32fc-50c7-aea1-2c2538070a96", 00:16:43.870 "is_configured": true, 00:16:43.870 "data_offset": 0, 00:16:43.870 "data_size": 65536 00:16:43.870 }, 00:16:43.870 { 00:16:43.870 "name": "BaseBdev2", 00:16:43.870 "uuid": "112fd2ce-711d-55f2-9deb-57bab17df7e0", 00:16:43.870 "is_configured": true, 
00:16:43.870 "data_offset": 0, 00:16:43.870 "data_size": 65536 00:16:43.870 }, 00:16:43.870 { 00:16:43.870 "name": "BaseBdev3", 00:16:43.870 "uuid": "b0045d43-d42a-5647-9490-edf84cf3ceba", 00:16:43.870 "is_configured": true, 00:16:43.870 "data_offset": 0, 00:16:43.870 "data_size": 65536 00:16:43.870 }, 00:16:43.870 { 00:16:43.870 "name": "BaseBdev4", 00:16:43.870 "uuid": "9b9aedaf-a353-5df8-9a89-9a3cf93f0ad5", 00:16:43.870 "is_configured": true, 00:16:43.870 "data_offset": 0, 00:16:43.870 "data_size": 65536 00:16:43.870 } 00:16:43.870 ] 00:16:43.870 }' 00:16:43.870 08:53:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.870 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:43.870 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.870 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:43.870 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:43.870 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:43.870 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:43.870 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:43.870 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:43.870 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:43.870 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.870 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.870 08:53:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:43.870 08:53:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.870 08:53:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.870 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:43.870 "name": "raid_bdev1", 00:16:43.870 "uuid": "04edbe95-cf76-445d-9871-fcf6d7ab9f00", 00:16:43.870 "strip_size_kb": 64, 00:16:43.870 "state": "online", 00:16:43.870 "raid_level": "raid5f", 00:16:43.870 "superblock": false, 00:16:43.870 "num_base_bdevs": 4, 00:16:43.870 "num_base_bdevs_discovered": 4, 00:16:43.870 "num_base_bdevs_operational": 4, 00:16:43.870 "base_bdevs_list": [ 00:16:43.870 { 00:16:43.870 "name": "spare", 00:16:43.870 "uuid": "b4744bc6-32fc-50c7-aea1-2c2538070a96", 00:16:43.870 "is_configured": true, 00:16:43.870 "data_offset": 0, 00:16:43.870 "data_size": 65536 00:16:43.870 }, 00:16:43.870 { 00:16:43.870 "name": "BaseBdev2", 00:16:43.870 "uuid": "112fd2ce-711d-55f2-9deb-57bab17df7e0", 00:16:43.870 "is_configured": true, 00:16:43.870 "data_offset": 0, 00:16:43.870 "data_size": 65536 00:16:43.870 }, 00:16:43.870 { 00:16:43.870 "name": "BaseBdev3", 00:16:43.870 "uuid": "b0045d43-d42a-5647-9490-edf84cf3ceba", 00:16:43.870 "is_configured": true, 00:16:43.870 "data_offset": 0, 00:16:43.870 "data_size": 65536 00:16:43.870 }, 00:16:43.870 { 00:16:43.870 "name": "BaseBdev4", 00:16:43.870 "uuid": "9b9aedaf-a353-5df8-9a89-9a3cf93f0ad5", 00:16:43.870 "is_configured": true, 00:16:43.870 "data_offset": 0, 00:16:43.870 "data_size": 65536 00:16:43.870 } 00:16:43.870 ] 00:16:43.870 }' 00:16:43.870 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.870 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:43.870 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.870 08:53:20 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:43.870 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:43.870 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.870 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.870 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.870 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.870 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:43.870 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.870 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.871 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.871 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.871 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.871 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.871 08:53:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.871 08:53:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.871 08:53:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.871 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.871 "name": "raid_bdev1", 00:16:43.871 "uuid": "04edbe95-cf76-445d-9871-fcf6d7ab9f00", 00:16:43.871 "strip_size_kb": 64, 00:16:43.871 "state": "online", 00:16:43.871 
"raid_level": "raid5f", 00:16:43.871 "superblock": false, 00:16:43.871 "num_base_bdevs": 4, 00:16:43.871 "num_base_bdevs_discovered": 4, 00:16:43.871 "num_base_bdevs_operational": 4, 00:16:43.871 "base_bdevs_list": [ 00:16:43.871 { 00:16:43.871 "name": "spare", 00:16:43.871 "uuid": "b4744bc6-32fc-50c7-aea1-2c2538070a96", 00:16:43.871 "is_configured": true, 00:16:43.871 "data_offset": 0, 00:16:43.871 "data_size": 65536 00:16:43.871 }, 00:16:43.871 { 00:16:43.871 "name": "BaseBdev2", 00:16:43.871 "uuid": "112fd2ce-711d-55f2-9deb-57bab17df7e0", 00:16:43.871 "is_configured": true, 00:16:43.871 "data_offset": 0, 00:16:43.871 "data_size": 65536 00:16:43.871 }, 00:16:43.871 { 00:16:43.871 "name": "BaseBdev3", 00:16:43.871 "uuid": "b0045d43-d42a-5647-9490-edf84cf3ceba", 00:16:43.871 "is_configured": true, 00:16:43.871 "data_offset": 0, 00:16:43.871 "data_size": 65536 00:16:43.871 }, 00:16:43.871 { 00:16:43.871 "name": "BaseBdev4", 00:16:43.871 "uuid": "9b9aedaf-a353-5df8-9a89-9a3cf93f0ad5", 00:16:43.871 "is_configured": true, 00:16:43.871 "data_offset": 0, 00:16:43.871 "data_size": 65536 00:16:43.871 } 00:16:43.871 ] 00:16:43.871 }' 00:16:43.871 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.871 08:53:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.441 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:44.441 08:53:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.441 08:53:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.441 [2024-10-05 08:53:20.633215] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:44.441 [2024-10-05 08:53:20.633288] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:44.441 [2024-10-05 08:53:20.633393] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:16:44.441 [2024-10-05 08:53:20.633502] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:44.441 [2024-10-05 08:53:20.633582] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:44.441 08:53:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.441 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.441 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:44.441 08:53:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.441 08:53:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.441 08:53:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.441 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:44.441 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:44.441 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:44.441 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:44.441 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:44.441 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:44.441 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:44.441 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:44.441 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:44.441 08:53:20 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@12 -- # local i 00:16:44.441 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:44.441 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:44.441 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:44.441 /dev/nbd0 00:16:44.701 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:44.701 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:44.701 08:53:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:44.701 08:53:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:16:44.701 08:53:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:44.701 08:53:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:44.701 08:53:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:44.701 08:53:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:16:44.701 08:53:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:44.701 08:53:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:44.701 08:53:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:44.701 1+0 records in 00:16:44.701 1+0 records out 00:16:44.701 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272327 s, 15.0 MB/s 00:16:44.701 08:53:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.701 08:53:20 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@886 -- # size=4096 00:16:44.701 08:53:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.701 08:53:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:44.701 08:53:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:16:44.701 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:44.701 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:44.701 08:53:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:44.701 /dev/nbd1 00:16:44.701 08:53:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:44.962 08:53:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:44.962 08:53:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:44.962 08:53:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:16:44.962 08:53:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:44.962 08:53:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:44.962 08:53:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:44.962 08:53:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:16:44.962 08:53:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:44.962 08:53:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:44.962 08:53:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:16:44.962 1+0 records in 00:16:44.962 1+0 records out 00:16:44.962 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000460155 s, 8.9 MB/s 00:16:44.962 08:53:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.962 08:53:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:16:44.962 08:53:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.962 08:53:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:44.962 08:53:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:16:44.962 08:53:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:44.962 08:53:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:44.962 08:53:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:44.962 08:53:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:44.962 08:53:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:44.962 08:53:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:44.962 08:53:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:44.962 08:53:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:44.962 08:53:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:44.962 08:53:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:45.222 08:53:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:45.222 
08:53:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:45.222 08:53:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:45.222 08:53:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:45.222 08:53:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:45.222 08:53:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:45.222 08:53:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:45.222 08:53:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:45.222 08:53:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:45.222 08:53:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:45.482 08:53:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:45.482 08:53:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:45.482 08:53:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:45.482 08:53:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:45.482 08:53:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:45.482 08:53:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:45.482 08:53:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:45.482 08:53:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:45.482 08:53:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:45.482 08:53:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81277 00:16:45.482 08:53:21 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 81277 ']' 00:16:45.482 08:53:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 81277 00:16:45.482 08:53:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:16:45.482 08:53:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:45.482 08:53:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81277 00:16:45.482 killing process with pid 81277 00:16:45.482 Received shutdown signal, test time was about 60.000000 seconds 00:16:45.482 00:16:45.482 Latency(us) 00:16:45.482 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:45.482 =================================================================================================================== 00:16:45.482 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:45.482 08:53:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:45.482 08:53:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:45.482 08:53:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81277' 00:16:45.482 08:53:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 81277 00:16:45.482 [2024-10-05 08:53:21.829935] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:45.482 08:53:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 81277 00:16:46.053 [2024-10-05 08:53:22.286527] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:46.994 08:53:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:46.994 00:16:46.994 real 0m19.034s 00:16:46.994 user 0m22.763s 00:16:46.994 sys 0m2.393s 00:16:46.994 08:53:23 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:16:46.994 08:53:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.994 ************************************ 00:16:46.994 END TEST raid5f_rebuild_test 00:16:46.994 ************************************ 00:16:47.254 08:53:23 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:16:47.254 08:53:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:47.254 08:53:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:47.254 08:53:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:47.254 ************************************ 00:16:47.254 START TEST raid5f_rebuild_test_sb 00:16:47.254 ************************************ 00:16:47.254 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 true false true 00:16:47.254 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:47.254 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:47.254 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:47.254 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:47.254 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:47.254 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:47.254 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:47.254 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:47.254 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:47.254 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:16:47.254 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:47.254 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:47.254 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:47.254 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:47.254 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:47.254 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:47.254 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:47.254 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:47.254 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:47.254 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:47.254 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:47.254 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:47.254 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:47.254 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:47.254 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:47.254 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:47.254 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:47.254 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:47.254 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:47.254 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:47.254 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:47.254 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:47.254 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=81667 00:16:47.254 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 81667 00:16:47.254 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:47.254 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 81667 ']' 00:16:47.254 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.254 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:47.254 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.254 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:47.254 08:53:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.254 [2024-10-05 08:53:23.652943] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:16:47.254 [2024-10-05 08:53:23.653186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81667 ] 00:16:47.254 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:47.254 Zero copy mechanism will not be used. 00:16:47.514 [2024-10-05 08:53:23.822182] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.774 [2024-10-05 08:53:24.022713] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.774 [2024-10-05 08:53:24.212088] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:47.774 [2024-10-05 08:53:24.212214] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:48.034 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:48.034 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:16:48.034 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:48.034 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:48.034 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.034 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.035 BaseBdev1_malloc 00:16:48.035 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.035 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:48.035 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.035 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:48.035 [2024-10-05 08:53:24.503376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:48.035 [2024-10-05 08:53:24.503487] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.035 [2024-10-05 08:53:24.503526] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:48.035 [2024-10-05 08:53:24.503560] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.296 [2024-10-05 08:53:24.505621] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.296 [2024-10-05 08:53:24.505699] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:48.296 BaseBdev1 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.296 BaseBdev2_malloc 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.296 [2024-10-05 08:53:24.585005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:48.296 
[2024-10-05 08:53:24.585109] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.296 [2024-10-05 08:53:24.585162] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:48.296 [2024-10-05 08:53:24.585175] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.296 [2024-10-05 08:53:24.587147] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.296 [2024-10-05 08:53:24.587186] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:48.296 BaseBdev2 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.296 BaseBdev3_malloc 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.296 [2024-10-05 08:53:24.638659] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:48.296 [2024-10-05 08:53:24.638750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.296 [2024-10-05 08:53:24.638784] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:48.296 [2024-10-05 08:53:24.638814] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.296 [2024-10-05 08:53:24.640676] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.296 [2024-10-05 08:53:24.640750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:48.296 BaseBdev3 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.296 BaseBdev4_malloc 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.296 [2024-10-05 08:53:24.691389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:48.296 [2024-10-05 08:53:24.691477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.296 [2024-10-05 08:53:24.691513] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:48.296 [2024-10-05 08:53:24.691542] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:16:48.296 [2024-10-05 08:53:24.693506] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.296 [2024-10-05 08:53:24.693581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:48.296 BaseBdev4 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.296 spare_malloc 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.296 spare_delay 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.296 [2024-10-05 08:53:24.756906] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:48.296 [2024-10-05 08:53:24.757015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.296 [2024-10-05 08:53:24.757051] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:48.296 [2024-10-05 08:53:24.757080] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.296 [2024-10-05 08:53:24.759066] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.296 [2024-10-05 08:53:24.759137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:48.296 spare 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.296 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.556 [2024-10-05 08:53:24.768958] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:48.557 [2024-10-05 08:53:24.770734] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:48.557 [2024-10-05 08:53:24.770835] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:48.557 [2024-10-05 08:53:24.770906] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:48.557 [2024-10-05 08:53:24.771134] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:48.557 [2024-10-05 08:53:24.771183] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:48.557 [2024-10-05 08:53:24.771438] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:48.557 [2024-10-05 08:53:24.778626] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:48.557 
[2024-10-05 08:53:24.778678] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:48.557 [2024-10-05 08:53:24.778881] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.557 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.557 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:48.557 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.557 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:48.557 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:48.557 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:48.557 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:48.557 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.557 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.557 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.557 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.557 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.557 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.557 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.557 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.557 08:53:24 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.557 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.557 "name": "raid_bdev1", 00:16:48.557 "uuid": "ccae6fe4-a9da-470b-9ffb-b6bd0ffbf10a", 00:16:48.557 "strip_size_kb": 64, 00:16:48.557 "state": "online", 00:16:48.557 "raid_level": "raid5f", 00:16:48.557 "superblock": true, 00:16:48.557 "num_base_bdevs": 4, 00:16:48.557 "num_base_bdevs_discovered": 4, 00:16:48.557 "num_base_bdevs_operational": 4, 00:16:48.557 "base_bdevs_list": [ 00:16:48.557 { 00:16:48.557 "name": "BaseBdev1", 00:16:48.557 "uuid": "b1a9ebd6-3874-581a-9b52-84f6d8702212", 00:16:48.557 "is_configured": true, 00:16:48.557 "data_offset": 2048, 00:16:48.557 "data_size": 63488 00:16:48.557 }, 00:16:48.557 { 00:16:48.557 "name": "BaseBdev2", 00:16:48.557 "uuid": "e6a692dd-8a19-5a78-9376-626f9467d6b9", 00:16:48.557 "is_configured": true, 00:16:48.557 "data_offset": 2048, 00:16:48.557 "data_size": 63488 00:16:48.557 }, 00:16:48.557 { 00:16:48.557 "name": "BaseBdev3", 00:16:48.557 "uuid": "d04fb02a-9355-53f6-a03c-f344064019c8", 00:16:48.557 "is_configured": true, 00:16:48.557 "data_offset": 2048, 00:16:48.557 "data_size": 63488 00:16:48.557 }, 00:16:48.557 { 00:16:48.557 "name": "BaseBdev4", 00:16:48.557 "uuid": "bbca4ed5-c55b-534a-b9c1-77b41cda503f", 00:16:48.557 "is_configured": true, 00:16:48.557 "data_offset": 2048, 00:16:48.557 "data_size": 63488 00:16:48.557 } 00:16:48.557 ] 00:16:48.557 }' 00:16:48.557 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.557 08:53:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.817 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:48.817 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:48.817 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.817 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.817 [2024-10-05 08:53:25.234160] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:48.817 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.817 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:16:48.817 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.817 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:48.817 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.817 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.076 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.076 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:49.076 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:49.076 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:49.076 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:49.076 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:49.076 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:49.076 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:49.076 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:49.076 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:49.076 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:49.076 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:49.076 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:49.076 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:49.076 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:49.076 [2024-10-05 08:53:25.497520] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:49.076 /dev/nbd0 00:16:49.076 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:49.335 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:49.335 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:49.335 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:16:49.335 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:49.335 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:49.335 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:49.335 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:16:49.335 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:49.335 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:49.335 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:16:49.335 1+0 records in 00:16:49.335 1+0 records out 00:16:49.335 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431849 s, 9.5 MB/s 00:16:49.335 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:49.335 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:16:49.335 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:49.335 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:49.335 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:16:49.335 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:49.335 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:49.335 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:49.335 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:49.335 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:49.335 08:53:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:16:49.905 496+0 records in 00:16:49.905 496+0 records out 00:16:49.905 97517568 bytes (98 MB, 93 MiB) copied, 0.570405 s, 171 MB/s 00:16:49.905 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:49.905 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:49.905 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:49.905 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- 
# local nbd_list 00:16:49.905 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:49.905 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:49.905 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:49.905 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:49.905 [2024-10-05 08:53:26.358194] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:49.905 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:49.905 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:49.905 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:49.905 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:49.905 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:49.905 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:49.905 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:49.905 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:49.905 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.905 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.905 [2024-10-05 08:53:26.374781] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:50.165 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.165 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online 
raid5f 64 3 00:16:50.165 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.165 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.165 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:50.165 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.165 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:50.165 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.165 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.165 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.165 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.165 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.165 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.165 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.165 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.165 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.165 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.165 "name": "raid_bdev1", 00:16:50.165 "uuid": "ccae6fe4-a9da-470b-9ffb-b6bd0ffbf10a", 00:16:50.165 "strip_size_kb": 64, 00:16:50.165 "state": "online", 00:16:50.165 "raid_level": "raid5f", 00:16:50.165 "superblock": true, 00:16:50.165 "num_base_bdevs": 4, 00:16:50.165 "num_base_bdevs_discovered": 3, 00:16:50.165 
"num_base_bdevs_operational": 3, 00:16:50.165 "base_bdevs_list": [ 00:16:50.165 { 00:16:50.165 "name": null, 00:16:50.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.165 "is_configured": false, 00:16:50.165 "data_offset": 0, 00:16:50.165 "data_size": 63488 00:16:50.165 }, 00:16:50.165 { 00:16:50.165 "name": "BaseBdev2", 00:16:50.165 "uuid": "e6a692dd-8a19-5a78-9376-626f9467d6b9", 00:16:50.165 "is_configured": true, 00:16:50.165 "data_offset": 2048, 00:16:50.165 "data_size": 63488 00:16:50.165 }, 00:16:50.165 { 00:16:50.165 "name": "BaseBdev3", 00:16:50.165 "uuid": "d04fb02a-9355-53f6-a03c-f344064019c8", 00:16:50.165 "is_configured": true, 00:16:50.165 "data_offset": 2048, 00:16:50.165 "data_size": 63488 00:16:50.165 }, 00:16:50.165 { 00:16:50.165 "name": "BaseBdev4", 00:16:50.165 "uuid": "bbca4ed5-c55b-534a-b9c1-77b41cda503f", 00:16:50.165 "is_configured": true, 00:16:50.165 "data_offset": 2048, 00:16:50.165 "data_size": 63488 00:16:50.165 } 00:16:50.165 ] 00:16:50.165 }' 00:16:50.165 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.165 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.425 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:50.425 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.425 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.425 [2024-10-05 08:53:26.834032] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:50.425 [2024-10-05 08:53:26.847692] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:16:50.425 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.425 08:53:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:50.425 
[2024-10-05 08:53:26.856584] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:51.806 08:53:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:51.806 08:53:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:51.806 08:53:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:51.806 08:53:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:51.806 08:53:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:51.806 08:53:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.806 08:53:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.806 08:53:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.806 08:53:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.806 08:53:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.806 08:53:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.806 "name": "raid_bdev1", 00:16:51.806 "uuid": "ccae6fe4-a9da-470b-9ffb-b6bd0ffbf10a", 00:16:51.806 "strip_size_kb": 64, 00:16:51.806 "state": "online", 00:16:51.806 "raid_level": "raid5f", 00:16:51.806 "superblock": true, 00:16:51.806 "num_base_bdevs": 4, 00:16:51.806 "num_base_bdevs_discovered": 4, 00:16:51.806 "num_base_bdevs_operational": 4, 00:16:51.806 "process": { 00:16:51.806 "type": "rebuild", 00:16:51.806 "target": "spare", 00:16:51.806 "progress": { 00:16:51.806 "blocks": 19200, 00:16:51.806 "percent": 10 00:16:51.806 } 00:16:51.806 }, 00:16:51.806 "base_bdevs_list": [ 00:16:51.806 { 00:16:51.806 "name": 
"spare", 00:16:51.806 "uuid": "479ba1ed-7b57-5c05-88de-9a14f92abba6", 00:16:51.806 "is_configured": true, 00:16:51.806 "data_offset": 2048, 00:16:51.806 "data_size": 63488 00:16:51.806 }, 00:16:51.806 { 00:16:51.806 "name": "BaseBdev2", 00:16:51.806 "uuid": "e6a692dd-8a19-5a78-9376-626f9467d6b9", 00:16:51.806 "is_configured": true, 00:16:51.806 "data_offset": 2048, 00:16:51.806 "data_size": 63488 00:16:51.806 }, 00:16:51.806 { 00:16:51.806 "name": "BaseBdev3", 00:16:51.806 "uuid": "d04fb02a-9355-53f6-a03c-f344064019c8", 00:16:51.806 "is_configured": true, 00:16:51.806 "data_offset": 2048, 00:16:51.806 "data_size": 63488 00:16:51.806 }, 00:16:51.806 { 00:16:51.806 "name": "BaseBdev4", 00:16:51.806 "uuid": "bbca4ed5-c55b-534a-b9c1-77b41cda503f", 00:16:51.806 "is_configured": true, 00:16:51.806 "data_offset": 2048, 00:16:51.806 "data_size": 63488 00:16:51.806 } 00:16:51.806 ] 00:16:51.806 }' 00:16:51.806 08:53:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.806 08:53:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:51.806 08:53:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:51.806 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:51.806 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:51.806 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.806 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.806 [2024-10-05 08:53:28.007229] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:51.806 [2024-10-05 08:53:28.062125] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:51.806 [2024-10-05 
08:53:28.062184] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:51.806 [2024-10-05 08:53:28.062200] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:51.806 [2024-10-05 08:53:28.062210] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:51.806 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.806 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:51.806 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:51.806 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:51.807 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:51.807 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.807 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:51.807 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.807 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.807 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.807 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.807 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.807 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.807 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.807 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:51.807 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.807 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.807 "name": "raid_bdev1", 00:16:51.807 "uuid": "ccae6fe4-a9da-470b-9ffb-b6bd0ffbf10a", 00:16:51.807 "strip_size_kb": 64, 00:16:51.807 "state": "online", 00:16:51.807 "raid_level": "raid5f", 00:16:51.807 "superblock": true, 00:16:51.807 "num_base_bdevs": 4, 00:16:51.807 "num_base_bdevs_discovered": 3, 00:16:51.807 "num_base_bdevs_operational": 3, 00:16:51.807 "base_bdevs_list": [ 00:16:51.807 { 00:16:51.807 "name": null, 00:16:51.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.807 "is_configured": false, 00:16:51.807 "data_offset": 0, 00:16:51.807 "data_size": 63488 00:16:51.807 }, 00:16:51.807 { 00:16:51.807 "name": "BaseBdev2", 00:16:51.807 "uuid": "e6a692dd-8a19-5a78-9376-626f9467d6b9", 00:16:51.807 "is_configured": true, 00:16:51.807 "data_offset": 2048, 00:16:51.807 "data_size": 63488 00:16:51.807 }, 00:16:51.807 { 00:16:51.807 "name": "BaseBdev3", 00:16:51.807 "uuid": "d04fb02a-9355-53f6-a03c-f344064019c8", 00:16:51.807 "is_configured": true, 00:16:51.807 "data_offset": 2048, 00:16:51.807 "data_size": 63488 00:16:51.807 }, 00:16:51.807 { 00:16:51.807 "name": "BaseBdev4", 00:16:51.807 "uuid": "bbca4ed5-c55b-534a-b9c1-77b41cda503f", 00:16:51.807 "is_configured": true, 00:16:51.807 "data_offset": 2048, 00:16:51.807 "data_size": 63488 00:16:51.807 } 00:16:51.807 ] 00:16:51.807 }' 00:16:51.807 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.807 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.065 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:52.065 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:16:52.065 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:52.065 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:52.065 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.323 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.323 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.323 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.323 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.323 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.324 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:52.324 "name": "raid_bdev1", 00:16:52.324 "uuid": "ccae6fe4-a9da-470b-9ffb-b6bd0ffbf10a", 00:16:52.324 "strip_size_kb": 64, 00:16:52.324 "state": "online", 00:16:52.324 "raid_level": "raid5f", 00:16:52.324 "superblock": true, 00:16:52.324 "num_base_bdevs": 4, 00:16:52.324 "num_base_bdevs_discovered": 3, 00:16:52.324 "num_base_bdevs_operational": 3, 00:16:52.324 "base_bdevs_list": [ 00:16:52.324 { 00:16:52.324 "name": null, 00:16:52.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.324 "is_configured": false, 00:16:52.324 "data_offset": 0, 00:16:52.324 "data_size": 63488 00:16:52.324 }, 00:16:52.324 { 00:16:52.324 "name": "BaseBdev2", 00:16:52.324 "uuid": "e6a692dd-8a19-5a78-9376-626f9467d6b9", 00:16:52.324 "is_configured": true, 00:16:52.324 "data_offset": 2048, 00:16:52.324 "data_size": 63488 00:16:52.324 }, 00:16:52.324 { 00:16:52.324 "name": "BaseBdev3", 00:16:52.324 "uuid": "d04fb02a-9355-53f6-a03c-f344064019c8", 00:16:52.324 "is_configured": true, 
00:16:52.324 "data_offset": 2048, 00:16:52.324 "data_size": 63488 00:16:52.324 }, 00:16:52.324 { 00:16:52.324 "name": "BaseBdev4", 00:16:52.324 "uuid": "bbca4ed5-c55b-534a-b9c1-77b41cda503f", 00:16:52.324 "is_configured": true, 00:16:52.324 "data_offset": 2048, 00:16:52.324 "data_size": 63488 00:16:52.324 } 00:16:52.324 ] 00:16:52.324 }' 00:16:52.324 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:52.324 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:52.324 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:52.324 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:52.324 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:52.324 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.324 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.324 [2024-10-05 08:53:28.688588] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:52.324 [2024-10-05 08:53:28.701935] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:16:52.324 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.324 08:53:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:52.324 [2024-10-05 08:53:28.711085] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:53.262 08:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:53.262 08:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:53.262 08:53:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:53.262 08:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:53.262 08:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:53.262 08:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.262 08:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.262 08:53:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.262 08:53:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.536 08:53:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.536 08:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:53.536 "name": "raid_bdev1", 00:16:53.536 "uuid": "ccae6fe4-a9da-470b-9ffb-b6bd0ffbf10a", 00:16:53.536 "strip_size_kb": 64, 00:16:53.536 "state": "online", 00:16:53.536 "raid_level": "raid5f", 00:16:53.536 "superblock": true, 00:16:53.536 "num_base_bdevs": 4, 00:16:53.536 "num_base_bdevs_discovered": 4, 00:16:53.536 "num_base_bdevs_operational": 4, 00:16:53.536 "process": { 00:16:53.536 "type": "rebuild", 00:16:53.536 "target": "spare", 00:16:53.536 "progress": { 00:16:53.536 "blocks": 19200, 00:16:53.536 "percent": 10 00:16:53.536 } 00:16:53.536 }, 00:16:53.536 "base_bdevs_list": [ 00:16:53.536 { 00:16:53.536 "name": "spare", 00:16:53.536 "uuid": "479ba1ed-7b57-5c05-88de-9a14f92abba6", 00:16:53.537 "is_configured": true, 00:16:53.537 "data_offset": 2048, 00:16:53.537 "data_size": 63488 00:16:53.537 }, 00:16:53.537 { 00:16:53.537 "name": "BaseBdev2", 00:16:53.537 "uuid": "e6a692dd-8a19-5a78-9376-626f9467d6b9", 00:16:53.537 "is_configured": true, 00:16:53.537 "data_offset": 2048, 00:16:53.537 "data_size": 63488 
00:16:53.537 }, 00:16:53.537 { 00:16:53.537 "name": "BaseBdev3", 00:16:53.537 "uuid": "d04fb02a-9355-53f6-a03c-f344064019c8", 00:16:53.537 "is_configured": true, 00:16:53.537 "data_offset": 2048, 00:16:53.537 "data_size": 63488 00:16:53.537 }, 00:16:53.537 { 00:16:53.537 "name": "BaseBdev4", 00:16:53.537 "uuid": "bbca4ed5-c55b-534a-b9c1-77b41cda503f", 00:16:53.537 "is_configured": true, 00:16:53.537 "data_offset": 2048, 00:16:53.537 "data_size": 63488 00:16:53.537 } 00:16:53.537 ] 00:16:53.537 }' 00:16:53.537 08:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:53.537 08:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:53.537 08:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:53.537 08:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:53.537 08:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:53.537 08:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:53.537 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:53.537 08:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:53.537 08:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:53.537 08:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=641 00:16:53.537 08:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:53.537 08:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:53.537 08:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:53.537 08:53:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:53.537 08:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:53.537 08:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:53.537 08:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.537 08:53:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.537 08:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.537 08:53:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.537 08:53:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.537 08:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:53.537 "name": "raid_bdev1", 00:16:53.537 "uuid": "ccae6fe4-a9da-470b-9ffb-b6bd0ffbf10a", 00:16:53.537 "strip_size_kb": 64, 00:16:53.537 "state": "online", 00:16:53.537 "raid_level": "raid5f", 00:16:53.537 "superblock": true, 00:16:53.537 "num_base_bdevs": 4, 00:16:53.537 "num_base_bdevs_discovered": 4, 00:16:53.537 "num_base_bdevs_operational": 4, 00:16:53.537 "process": { 00:16:53.537 "type": "rebuild", 00:16:53.537 "target": "spare", 00:16:53.537 "progress": { 00:16:53.537 "blocks": 21120, 00:16:53.537 "percent": 11 00:16:53.537 } 00:16:53.537 }, 00:16:53.537 "base_bdevs_list": [ 00:16:53.537 { 00:16:53.537 "name": "spare", 00:16:53.537 "uuid": "479ba1ed-7b57-5c05-88de-9a14f92abba6", 00:16:53.537 "is_configured": true, 00:16:53.537 "data_offset": 2048, 00:16:53.537 "data_size": 63488 00:16:53.537 }, 00:16:53.537 { 00:16:53.537 "name": "BaseBdev2", 00:16:53.537 "uuid": "e6a692dd-8a19-5a78-9376-626f9467d6b9", 00:16:53.537 "is_configured": true, 00:16:53.537 "data_offset": 2048, 00:16:53.537 "data_size": 63488 
00:16:53.537 }, 00:16:53.537 { 00:16:53.537 "name": "BaseBdev3", 00:16:53.537 "uuid": "d04fb02a-9355-53f6-a03c-f344064019c8", 00:16:53.537 "is_configured": true, 00:16:53.537 "data_offset": 2048, 00:16:53.537 "data_size": 63488 00:16:53.537 }, 00:16:53.537 { 00:16:53.537 "name": "BaseBdev4", 00:16:53.537 "uuid": "bbca4ed5-c55b-534a-b9c1-77b41cda503f", 00:16:53.537 "is_configured": true, 00:16:53.537 "data_offset": 2048, 00:16:53.537 "data_size": 63488 00:16:53.537 } 00:16:53.537 ] 00:16:53.537 }' 00:16:53.537 08:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:53.537 08:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:53.537 08:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:53.537 08:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:53.537 08:53:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:54.944 08:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:54.944 08:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:54.944 08:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.944 08:53:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:54.944 08:53:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:54.944 08:53:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.944 08:53:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.944 08:53:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.944 08:53:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.944 08:53:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.944 08:53:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.944 08:53:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.944 "name": "raid_bdev1", 00:16:54.944 "uuid": "ccae6fe4-a9da-470b-9ffb-b6bd0ffbf10a", 00:16:54.944 "strip_size_kb": 64, 00:16:54.944 "state": "online", 00:16:54.944 "raid_level": "raid5f", 00:16:54.944 "superblock": true, 00:16:54.944 "num_base_bdevs": 4, 00:16:54.944 "num_base_bdevs_discovered": 4, 00:16:54.944 "num_base_bdevs_operational": 4, 00:16:54.944 "process": { 00:16:54.944 "type": "rebuild", 00:16:54.944 "target": "spare", 00:16:54.944 "progress": { 00:16:54.944 "blocks": 42240, 00:16:54.944 "percent": 22 00:16:54.944 } 00:16:54.944 }, 00:16:54.944 "base_bdevs_list": [ 00:16:54.944 { 00:16:54.944 "name": "spare", 00:16:54.944 "uuid": "479ba1ed-7b57-5c05-88de-9a14f92abba6", 00:16:54.944 "is_configured": true, 00:16:54.944 "data_offset": 2048, 00:16:54.944 "data_size": 63488 00:16:54.944 }, 00:16:54.944 { 00:16:54.944 "name": "BaseBdev2", 00:16:54.944 "uuid": "e6a692dd-8a19-5a78-9376-626f9467d6b9", 00:16:54.944 "is_configured": true, 00:16:54.944 "data_offset": 2048, 00:16:54.944 "data_size": 63488 00:16:54.944 }, 00:16:54.944 { 00:16:54.944 "name": "BaseBdev3", 00:16:54.944 "uuid": "d04fb02a-9355-53f6-a03c-f344064019c8", 00:16:54.944 "is_configured": true, 00:16:54.944 "data_offset": 2048, 00:16:54.944 "data_size": 63488 00:16:54.944 }, 00:16:54.944 { 00:16:54.944 "name": "BaseBdev4", 00:16:54.944 "uuid": "bbca4ed5-c55b-534a-b9c1-77b41cda503f", 00:16:54.944 "is_configured": true, 00:16:54.944 "data_offset": 2048, 00:16:54.944 "data_size": 63488 00:16:54.944 } 00:16:54.944 ] 00:16:54.944 }' 00:16:54.944 08:53:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:54.944 08:53:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:54.944 08:53:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:54.944 08:53:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:54.944 08:53:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:55.883 08:53:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:55.883 08:53:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:55.883 08:53:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:55.883 08:53:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:55.884 08:53:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:55.884 08:53:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:55.884 08:53:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.884 08:53:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.884 08:53:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.884 08:53:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.884 08:53:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.884 08:53:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.884 "name": "raid_bdev1", 00:16:55.884 "uuid": "ccae6fe4-a9da-470b-9ffb-b6bd0ffbf10a", 00:16:55.884 
"strip_size_kb": 64, 00:16:55.884 "state": "online", 00:16:55.884 "raid_level": "raid5f", 00:16:55.884 "superblock": true, 00:16:55.884 "num_base_bdevs": 4, 00:16:55.884 "num_base_bdevs_discovered": 4, 00:16:55.884 "num_base_bdevs_operational": 4, 00:16:55.884 "process": { 00:16:55.884 "type": "rebuild", 00:16:55.884 "target": "spare", 00:16:55.884 "progress": { 00:16:55.884 "blocks": 65280, 00:16:55.884 "percent": 34 00:16:55.884 } 00:16:55.884 }, 00:16:55.884 "base_bdevs_list": [ 00:16:55.884 { 00:16:55.884 "name": "spare", 00:16:55.884 "uuid": "479ba1ed-7b57-5c05-88de-9a14f92abba6", 00:16:55.884 "is_configured": true, 00:16:55.884 "data_offset": 2048, 00:16:55.884 "data_size": 63488 00:16:55.884 }, 00:16:55.884 { 00:16:55.884 "name": "BaseBdev2", 00:16:55.884 "uuid": "e6a692dd-8a19-5a78-9376-626f9467d6b9", 00:16:55.884 "is_configured": true, 00:16:55.884 "data_offset": 2048, 00:16:55.884 "data_size": 63488 00:16:55.884 }, 00:16:55.884 { 00:16:55.884 "name": "BaseBdev3", 00:16:55.884 "uuid": "d04fb02a-9355-53f6-a03c-f344064019c8", 00:16:55.884 "is_configured": true, 00:16:55.884 "data_offset": 2048, 00:16:55.884 "data_size": 63488 00:16:55.884 }, 00:16:55.884 { 00:16:55.884 "name": "BaseBdev4", 00:16:55.884 "uuid": "bbca4ed5-c55b-534a-b9c1-77b41cda503f", 00:16:55.884 "is_configured": true, 00:16:55.884 "data_offset": 2048, 00:16:55.884 "data_size": 63488 00:16:55.884 } 00:16:55.884 ] 00:16:55.884 }' 00:16:55.884 08:53:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.884 08:53:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:55.884 08:53:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:55.884 08:53:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:55.884 08:53:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:57.265 
08:53:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:57.265 08:53:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:57.265 08:53:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.265 08:53:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:57.265 08:53:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:57.265 08:53:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.265 08:53:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.265 08:53:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.265 08:53:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.265 08:53:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.265 08:53:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.265 08:53:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.265 "name": "raid_bdev1", 00:16:57.265 "uuid": "ccae6fe4-a9da-470b-9ffb-b6bd0ffbf10a", 00:16:57.265 "strip_size_kb": 64, 00:16:57.265 "state": "online", 00:16:57.265 "raid_level": "raid5f", 00:16:57.265 "superblock": true, 00:16:57.265 "num_base_bdevs": 4, 00:16:57.265 "num_base_bdevs_discovered": 4, 00:16:57.265 "num_base_bdevs_operational": 4, 00:16:57.265 "process": { 00:16:57.265 "type": "rebuild", 00:16:57.265 "target": "spare", 00:16:57.265 "progress": { 00:16:57.265 "blocks": 86400, 00:16:57.265 "percent": 45 00:16:57.265 } 00:16:57.265 }, 00:16:57.265 "base_bdevs_list": [ 00:16:57.265 { 00:16:57.265 "name": "spare", 00:16:57.265 "uuid": 
"479ba1ed-7b57-5c05-88de-9a14f92abba6", 00:16:57.265 "is_configured": true, 00:16:57.265 "data_offset": 2048, 00:16:57.265 "data_size": 63488 00:16:57.265 }, 00:16:57.265 { 00:16:57.265 "name": "BaseBdev2", 00:16:57.265 "uuid": "e6a692dd-8a19-5a78-9376-626f9467d6b9", 00:16:57.265 "is_configured": true, 00:16:57.265 "data_offset": 2048, 00:16:57.265 "data_size": 63488 00:16:57.265 }, 00:16:57.265 { 00:16:57.265 "name": "BaseBdev3", 00:16:57.265 "uuid": "d04fb02a-9355-53f6-a03c-f344064019c8", 00:16:57.265 "is_configured": true, 00:16:57.265 "data_offset": 2048, 00:16:57.265 "data_size": 63488 00:16:57.265 }, 00:16:57.265 { 00:16:57.265 "name": "BaseBdev4", 00:16:57.265 "uuid": "bbca4ed5-c55b-534a-b9c1-77b41cda503f", 00:16:57.265 "is_configured": true, 00:16:57.265 "data_offset": 2048, 00:16:57.265 "data_size": 63488 00:16:57.265 } 00:16:57.265 ] 00:16:57.265 }' 00:16:57.265 08:53:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.265 08:53:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:57.265 08:53:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.265 08:53:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:57.265 08:53:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:58.214 08:53:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:58.214 08:53:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:58.214 08:53:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:58.214 08:53:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:58.214 08:53:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:16:58.214 08:53:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:58.214 08:53:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.214 08:53:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.214 08:53:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.214 08:53:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.214 08:53:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.214 08:53:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:58.214 "name": "raid_bdev1", 00:16:58.214 "uuid": "ccae6fe4-a9da-470b-9ffb-b6bd0ffbf10a", 00:16:58.214 "strip_size_kb": 64, 00:16:58.214 "state": "online", 00:16:58.214 "raid_level": "raid5f", 00:16:58.214 "superblock": true, 00:16:58.214 "num_base_bdevs": 4, 00:16:58.214 "num_base_bdevs_discovered": 4, 00:16:58.214 "num_base_bdevs_operational": 4, 00:16:58.214 "process": { 00:16:58.214 "type": "rebuild", 00:16:58.214 "target": "spare", 00:16:58.214 "progress": { 00:16:58.214 "blocks": 109440, 00:16:58.214 "percent": 57 00:16:58.214 } 00:16:58.214 }, 00:16:58.214 "base_bdevs_list": [ 00:16:58.214 { 00:16:58.214 "name": "spare", 00:16:58.214 "uuid": "479ba1ed-7b57-5c05-88de-9a14f92abba6", 00:16:58.214 "is_configured": true, 00:16:58.214 "data_offset": 2048, 00:16:58.214 "data_size": 63488 00:16:58.214 }, 00:16:58.214 { 00:16:58.214 "name": "BaseBdev2", 00:16:58.214 "uuid": "e6a692dd-8a19-5a78-9376-626f9467d6b9", 00:16:58.214 "is_configured": true, 00:16:58.214 "data_offset": 2048, 00:16:58.214 "data_size": 63488 00:16:58.214 }, 00:16:58.214 { 00:16:58.214 "name": "BaseBdev3", 00:16:58.214 "uuid": "d04fb02a-9355-53f6-a03c-f344064019c8", 00:16:58.214 "is_configured": true, 00:16:58.214 
"data_offset": 2048, 00:16:58.214 "data_size": 63488 00:16:58.214 }, 00:16:58.214 { 00:16:58.214 "name": "BaseBdev4", 00:16:58.214 "uuid": "bbca4ed5-c55b-534a-b9c1-77b41cda503f", 00:16:58.214 "is_configured": true, 00:16:58.214 "data_offset": 2048, 00:16:58.214 "data_size": 63488 00:16:58.214 } 00:16:58.214 ] 00:16:58.214 }' 00:16:58.214 08:53:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:58.214 08:53:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:58.214 08:53:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:58.214 08:53:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:58.214 08:53:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:59.154 08:53:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:59.154 08:53:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:59.154 08:53:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.154 08:53:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:59.154 08:53:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:59.154 08:53:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.154 08:53:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.154 08:53:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.154 08:53:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.154 08:53:35 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:59.154 08:53:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.154 08:53:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.154 "name": "raid_bdev1", 00:16:59.154 "uuid": "ccae6fe4-a9da-470b-9ffb-b6bd0ffbf10a", 00:16:59.154 "strip_size_kb": 64, 00:16:59.154 "state": "online", 00:16:59.154 "raid_level": "raid5f", 00:16:59.154 "superblock": true, 00:16:59.154 "num_base_bdevs": 4, 00:16:59.154 "num_base_bdevs_discovered": 4, 00:16:59.154 "num_base_bdevs_operational": 4, 00:16:59.154 "process": { 00:16:59.154 "type": "rebuild", 00:16:59.154 "target": "spare", 00:16:59.154 "progress": { 00:16:59.154 "blocks": 130560, 00:16:59.154 "percent": 68 00:16:59.154 } 00:16:59.154 }, 00:16:59.154 "base_bdevs_list": [ 00:16:59.154 { 00:16:59.154 "name": "spare", 00:16:59.154 "uuid": "479ba1ed-7b57-5c05-88de-9a14f92abba6", 00:16:59.154 "is_configured": true, 00:16:59.154 "data_offset": 2048, 00:16:59.154 "data_size": 63488 00:16:59.154 }, 00:16:59.154 { 00:16:59.154 "name": "BaseBdev2", 00:16:59.154 "uuid": "e6a692dd-8a19-5a78-9376-626f9467d6b9", 00:16:59.154 "is_configured": true, 00:16:59.154 "data_offset": 2048, 00:16:59.154 "data_size": 63488 00:16:59.154 }, 00:16:59.154 { 00:16:59.154 "name": "BaseBdev3", 00:16:59.154 "uuid": "d04fb02a-9355-53f6-a03c-f344064019c8", 00:16:59.154 "is_configured": true, 00:16:59.154 "data_offset": 2048, 00:16:59.154 "data_size": 63488 00:16:59.154 }, 00:16:59.154 { 00:16:59.154 "name": "BaseBdev4", 00:16:59.154 "uuid": "bbca4ed5-c55b-534a-b9c1-77b41cda503f", 00:16:59.154 "is_configured": true, 00:16:59.154 "data_offset": 2048, 00:16:59.154 "data_size": 63488 00:16:59.154 } 00:16:59.154 ] 00:16:59.154 }' 00:16:59.154 08:53:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.414 08:53:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild 
== \r\e\b\u\i\l\d ]] 00:16:59.414 08:53:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.414 08:53:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:59.414 08:53:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:00.354 08:53:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:00.354 08:53:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:00.354 08:53:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:00.354 08:53:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:00.354 08:53:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:00.354 08:53:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:00.354 08:53:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.354 08:53:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.354 08:53:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.354 08:53:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.354 08:53:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.354 08:53:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:00.354 "name": "raid_bdev1", 00:17:00.354 "uuid": "ccae6fe4-a9da-470b-9ffb-b6bd0ffbf10a", 00:17:00.354 "strip_size_kb": 64, 00:17:00.354 "state": "online", 00:17:00.354 "raid_level": "raid5f", 00:17:00.354 "superblock": true, 00:17:00.354 "num_base_bdevs": 4, 00:17:00.354 "num_base_bdevs_discovered": 4, 
00:17:00.354 "num_base_bdevs_operational": 4, 00:17:00.354 "process": { 00:17:00.354 "type": "rebuild", 00:17:00.354 "target": "spare", 00:17:00.354 "progress": { 00:17:00.354 "blocks": 151680, 00:17:00.354 "percent": 79 00:17:00.354 } 00:17:00.354 }, 00:17:00.354 "base_bdevs_list": [ 00:17:00.354 { 00:17:00.354 "name": "spare", 00:17:00.354 "uuid": "479ba1ed-7b57-5c05-88de-9a14f92abba6", 00:17:00.354 "is_configured": true, 00:17:00.354 "data_offset": 2048, 00:17:00.354 "data_size": 63488 00:17:00.354 }, 00:17:00.354 { 00:17:00.354 "name": "BaseBdev2", 00:17:00.354 "uuid": "e6a692dd-8a19-5a78-9376-626f9467d6b9", 00:17:00.354 "is_configured": true, 00:17:00.354 "data_offset": 2048, 00:17:00.354 "data_size": 63488 00:17:00.354 }, 00:17:00.354 { 00:17:00.354 "name": "BaseBdev3", 00:17:00.354 "uuid": "d04fb02a-9355-53f6-a03c-f344064019c8", 00:17:00.354 "is_configured": true, 00:17:00.354 "data_offset": 2048, 00:17:00.354 "data_size": 63488 00:17:00.354 }, 00:17:00.354 { 00:17:00.354 "name": "BaseBdev4", 00:17:00.354 "uuid": "bbca4ed5-c55b-534a-b9c1-77b41cda503f", 00:17:00.354 "is_configured": true, 00:17:00.354 "data_offset": 2048, 00:17:00.354 "data_size": 63488 00:17:00.354 } 00:17:00.354 ] 00:17:00.354 }' 00:17:00.354 08:53:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:00.354 08:53:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:00.354 08:53:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:00.615 08:53:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:00.615 08:53:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:01.556 08:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:01.556 08:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:17:01.556 08:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.556 08:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:01.556 08:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:01.556 08:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.556 08:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.556 08:53:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.556 08:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.556 08:53:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.556 08:53:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.556 08:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.556 "name": "raid_bdev1", 00:17:01.556 "uuid": "ccae6fe4-a9da-470b-9ffb-b6bd0ffbf10a", 00:17:01.556 "strip_size_kb": 64, 00:17:01.556 "state": "online", 00:17:01.556 "raid_level": "raid5f", 00:17:01.556 "superblock": true, 00:17:01.556 "num_base_bdevs": 4, 00:17:01.556 "num_base_bdevs_discovered": 4, 00:17:01.556 "num_base_bdevs_operational": 4, 00:17:01.556 "process": { 00:17:01.556 "type": "rebuild", 00:17:01.556 "target": "spare", 00:17:01.556 "progress": { 00:17:01.556 "blocks": 174720, 00:17:01.556 "percent": 91 00:17:01.556 } 00:17:01.556 }, 00:17:01.556 "base_bdevs_list": [ 00:17:01.556 { 00:17:01.556 "name": "spare", 00:17:01.556 "uuid": "479ba1ed-7b57-5c05-88de-9a14f92abba6", 00:17:01.556 "is_configured": true, 00:17:01.556 "data_offset": 2048, 00:17:01.556 "data_size": 63488 00:17:01.556 }, 00:17:01.557 { 00:17:01.557 "name": "BaseBdev2", 
00:17:01.557 "uuid": "e6a692dd-8a19-5a78-9376-626f9467d6b9", 00:17:01.557 "is_configured": true, 00:17:01.557 "data_offset": 2048, 00:17:01.557 "data_size": 63488 00:17:01.557 }, 00:17:01.557 { 00:17:01.557 "name": "BaseBdev3", 00:17:01.557 "uuid": "d04fb02a-9355-53f6-a03c-f344064019c8", 00:17:01.557 "is_configured": true, 00:17:01.557 "data_offset": 2048, 00:17:01.557 "data_size": 63488 00:17:01.557 }, 00:17:01.557 { 00:17:01.557 "name": "BaseBdev4", 00:17:01.557 "uuid": "bbca4ed5-c55b-534a-b9c1-77b41cda503f", 00:17:01.557 "is_configured": true, 00:17:01.557 "data_offset": 2048, 00:17:01.557 "data_size": 63488 00:17:01.557 } 00:17:01.557 ] 00:17:01.557 }' 00:17:01.557 08:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.557 08:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:01.557 08:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.557 08:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:01.557 08:53:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:02.491 [2024-10-05 08:53:38.751746] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:02.491 [2024-10-05 08:53:38.751805] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:02.491 [2024-10-05 08:53:38.751920] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:02.751 08:53:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:02.751 08:53:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:02.751 08:53:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:02.751 08:53:38 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:02.751 08:53:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:02.751 08:53:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:02.751 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.751 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.751 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.751 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.751 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.751 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:02.751 "name": "raid_bdev1", 00:17:02.751 "uuid": "ccae6fe4-a9da-470b-9ffb-b6bd0ffbf10a", 00:17:02.751 "strip_size_kb": 64, 00:17:02.751 "state": "online", 00:17:02.751 "raid_level": "raid5f", 00:17:02.751 "superblock": true, 00:17:02.751 "num_base_bdevs": 4, 00:17:02.751 "num_base_bdevs_discovered": 4, 00:17:02.751 "num_base_bdevs_operational": 4, 00:17:02.751 "base_bdevs_list": [ 00:17:02.751 { 00:17:02.751 "name": "spare", 00:17:02.751 "uuid": "479ba1ed-7b57-5c05-88de-9a14f92abba6", 00:17:02.751 "is_configured": true, 00:17:02.751 "data_offset": 2048, 00:17:02.751 "data_size": 63488 00:17:02.751 }, 00:17:02.751 { 00:17:02.751 "name": "BaseBdev2", 00:17:02.751 "uuid": "e6a692dd-8a19-5a78-9376-626f9467d6b9", 00:17:02.751 "is_configured": true, 00:17:02.751 "data_offset": 2048, 00:17:02.751 "data_size": 63488 00:17:02.751 }, 00:17:02.751 { 00:17:02.751 "name": "BaseBdev3", 00:17:02.751 "uuid": "d04fb02a-9355-53f6-a03c-f344064019c8", 00:17:02.751 "is_configured": true, 00:17:02.751 "data_offset": 2048, 00:17:02.751 
"data_size": 63488 00:17:02.751 }, 00:17:02.751 { 00:17:02.751 "name": "BaseBdev4", 00:17:02.751 "uuid": "bbca4ed5-c55b-534a-b9c1-77b41cda503f", 00:17:02.751 "is_configured": true, 00:17:02.751 "data_offset": 2048, 00:17:02.751 "data_size": 63488 00:17:02.751 } 00:17:02.751 ] 00:17:02.751 }' 00:17:02.751 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.751 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:02.751 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.751 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:02.751 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:02.751 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:02.751 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:02.751 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:02.751 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:02.751 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:02.751 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.751 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.751 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.751 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.751 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.751 08:53:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:02.751 "name": "raid_bdev1", 00:17:02.751 "uuid": "ccae6fe4-a9da-470b-9ffb-b6bd0ffbf10a", 00:17:02.751 "strip_size_kb": 64, 00:17:02.751 "state": "online", 00:17:02.751 "raid_level": "raid5f", 00:17:02.751 "superblock": true, 00:17:02.751 "num_base_bdevs": 4, 00:17:02.751 "num_base_bdevs_discovered": 4, 00:17:02.751 "num_base_bdevs_operational": 4, 00:17:02.751 "base_bdevs_list": [ 00:17:02.751 { 00:17:02.751 "name": "spare", 00:17:02.751 "uuid": "479ba1ed-7b57-5c05-88de-9a14f92abba6", 00:17:02.751 "is_configured": true, 00:17:02.751 "data_offset": 2048, 00:17:02.751 "data_size": 63488 00:17:02.751 }, 00:17:02.751 { 00:17:02.751 "name": "BaseBdev2", 00:17:02.751 "uuid": "e6a692dd-8a19-5a78-9376-626f9467d6b9", 00:17:02.751 "is_configured": true, 00:17:02.751 "data_offset": 2048, 00:17:02.751 "data_size": 63488 00:17:02.751 }, 00:17:02.751 { 00:17:02.751 "name": "BaseBdev3", 00:17:02.751 "uuid": "d04fb02a-9355-53f6-a03c-f344064019c8", 00:17:02.751 "is_configured": true, 00:17:02.751 "data_offset": 2048, 00:17:02.751 "data_size": 63488 00:17:02.751 }, 00:17:02.751 { 00:17:02.751 "name": "BaseBdev4", 00:17:02.751 "uuid": "bbca4ed5-c55b-534a-b9c1-77b41cda503f", 00:17:02.751 "is_configured": true, 00:17:02.751 "data_offset": 2048, 00:17:02.751 "data_size": 63488 00:17:02.751 } 00:17:02.751 ] 00:17:02.751 }' 00:17:02.751 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:03.011 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:03.011 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:03.011 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:03.011 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 
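The rebuild-polling pattern repeated throughout the trace extracts `.process.type` and `.process.target` from the `bdev_raid_get_bdevs` output, defaulting to `"none"` once the rebuild process disappears. The following is an illustrative sketch (not part of the captured trace) of that extraction, run against an abridged JSON sample copied from the RPC output above; it assumes `jq` is available, as in the test environment:

```shell
# Sketch: pull the rebuild process fields the same way bdev_raid.sh@176-177 do.
# jq's `//` alternative operator yields "none" when .process is absent,
# which is how the test detects that the rebuild has finished.
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "online",
  "process": {
    "type": "rebuild",
    "target": "spare",
    "progress": { "blocks": 174720, "percent": 91 }
  }
}'

process_type=$(echo "$raid_bdev_info" | jq -r '.process.type // "none"')
process_target=$(echo "$raid_bdev_info" | jq -r '.process.target // "none"')
percent=$(echo "$raid_bdev_info" | jq -r '.process.progress.percent // 0')

echo "type=$process_type target=$process_target percent=$percent"
```

On a bdev with no active process, the same two filters both return `none`, which is what trips the `break` at `bdev_raid.sh@709` in the trace.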
00:17:03.011 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:03.011 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.011 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:03.011 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:03.011 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:03.011 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.011 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.011 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.011 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.011 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.011 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.011 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.011 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.011 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.011 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.012 "name": "raid_bdev1", 00:17:03.012 "uuid": "ccae6fe4-a9da-470b-9ffb-b6bd0ffbf10a", 00:17:03.012 "strip_size_kb": 64, 00:17:03.012 "state": "online", 00:17:03.012 "raid_level": "raid5f", 00:17:03.012 "superblock": true, 00:17:03.012 "num_base_bdevs": 4, 00:17:03.012 "num_base_bdevs_discovered": 4, 00:17:03.012 
"num_base_bdevs_operational": 4, 00:17:03.012 "base_bdevs_list": [ 00:17:03.012 { 00:17:03.012 "name": "spare", 00:17:03.012 "uuid": "479ba1ed-7b57-5c05-88de-9a14f92abba6", 00:17:03.012 "is_configured": true, 00:17:03.012 "data_offset": 2048, 00:17:03.012 "data_size": 63488 00:17:03.012 }, 00:17:03.012 { 00:17:03.012 "name": "BaseBdev2", 00:17:03.012 "uuid": "e6a692dd-8a19-5a78-9376-626f9467d6b9", 00:17:03.012 "is_configured": true, 00:17:03.012 "data_offset": 2048, 00:17:03.012 "data_size": 63488 00:17:03.012 }, 00:17:03.012 { 00:17:03.012 "name": "BaseBdev3", 00:17:03.012 "uuid": "d04fb02a-9355-53f6-a03c-f344064019c8", 00:17:03.012 "is_configured": true, 00:17:03.012 "data_offset": 2048, 00:17:03.012 "data_size": 63488 00:17:03.012 }, 00:17:03.012 { 00:17:03.012 "name": "BaseBdev4", 00:17:03.012 "uuid": "bbca4ed5-c55b-534a-b9c1-77b41cda503f", 00:17:03.012 "is_configured": true, 00:17:03.012 "data_offset": 2048, 00:17:03.012 "data_size": 63488 00:17:03.012 } 00:17:03.012 ] 00:17:03.012 }' 00:17:03.012 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.012 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.581 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:03.581 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.581 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.581 [2024-10-05 08:53:39.789940] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:03.581 [2024-10-05 08:53:39.789987] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:03.581 [2024-10-05 08:53:39.790059] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:03.581 [2024-10-05 08:53:39.790144] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:17:03.581 [2024-10-05 08:53:39.790158] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:03.581 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.581 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.581 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.581 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.581 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:03.581 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.581 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:03.581 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:03.581 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:03.581 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:03.581 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:03.581 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:03.581 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:03.581 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:03.581 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:03.581 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:03.582 08:53:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:03.582 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:03.582 08:53:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:03.582 /dev/nbd0 00:17:03.842 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:03.842 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:03.842 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:03.842 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:17:03.842 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:03.842 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:03.842 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:03.842 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:17:03.842 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:03.842 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:03.842 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:03.842 1+0 records in 00:17:03.842 1+0 records out 00:17:03.842 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244974 s, 16.7 MB/s 00:17:03.842 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:03.842 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@886 -- # size=4096 00:17:03.842 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:03.842 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:03.842 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:17:03.842 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:03.842 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:03.842 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:03.842 /dev/nbd1 00:17:03.842 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:04.102 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:04.102 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:04.102 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:17:04.102 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:04.102 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:04.102 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:04.102 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:17:04.102 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:04.102 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:04.102 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:04.102 1+0 records in 00:17:04.102 1+0 records out 00:17:04.102 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00042578 s, 9.6 MB/s 00:17:04.102 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:04.102 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:17:04.102 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:04.102 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:04.102 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:17:04.102 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:04.102 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:04.102 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:04.103 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:04.103 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:04.103 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:04.103 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:04.103 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:04.103 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:04.103 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
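The data check at `bdev_raid.sh@738` compares the two exported nbd devices with `cmp -i 1048576`, skipping the first 1 MiB of both so that per-device metadata does not cause a spurious mismatch. A minimal, self-contained illustration (not part of the captured trace) using ordinary temp files in place of `/dev/nbd0` and `/dev/nbd1`:

```shell
# Sketch: cmp -i SKIP ignores the same number of leading bytes in BOTH
# inputs, so two "devices" whose headers differ but whose data regions
# match still compare equal. Plain files stand in for the nbd devices.
offset=1048576                              # 1 MiB, as in the cmp call above

tmpdir=$(mktemp -d)
head -c "$offset" /dev/urandom > "$tmpdir/hdr0"   # differing header regions
head -c "$offset" /dev/zero    > "$tmpdir/hdr1"
head -c "$offset" /dev/urandom > "$tmpdir/data"   # shared data region

cat "$tmpdir/hdr0" "$tmpdir/data" > "$tmpdir/disk0"
cat "$tmpdir/hdr1" "$tmpdir/data" > "$tmpdir/disk1"

if cmp -s -i "$offset" "$tmpdir/disk0" "$tmpdir/disk1"; then
    result=match        # headers skipped, data regions identical
else
    result=differ
fi
echo "result=$result"
rm -rf "$tmpdir"
```

In the trace the skipped region holds the raid superblock written by the `_sb` test variant, which legitimately differs between the rebuilt spare and `BaseBdev1`.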
00:17:04.363 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:04.363 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:04.363 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:04.363 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:04.363 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:04.363 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:04.363 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:04.363 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:04.363 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:04.363 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:04.623 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:04.623 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:04.623 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:04.623 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:04.623 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:04.623 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:04.623 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:04.623 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:04.623 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # 
'[' true = true ']' 00:17:04.623 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:04.623 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.623 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.623 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.623 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:04.623 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.623 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.623 [2024-10-05 08:53:40.938117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:04.623 [2024-10-05 08:53:40.938173] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:04.623 [2024-10-05 08:53:40.938194] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:04.623 [2024-10-05 08:53:40.938203] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:04.623 [2024-10-05 08:53:40.940393] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:04.623 [2024-10-05 08:53:40.940434] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:04.623 [2024-10-05 08:53:40.940513] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:04.623 [2024-10-05 08:53:40.940569] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:04.623 [2024-10-05 08:53:40.940719] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:04.623 [2024-10-05 08:53:40.940802] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:17:04.623 [2024-10-05 08:53:40.940887] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:04.623 spare 00:17:04.623 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.623 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:04.623 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.623 08:53:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.623 [2024-10-05 08:53:41.040801] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:04.623 [2024-10-05 08:53:41.040867] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:04.623 [2024-10-05 08:53:41.041176] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:17:04.623 [2024-10-05 08:53:41.047816] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:04.623 [2024-10-05 08:53:41.047870] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:04.624 [2024-10-05 08:53:41.048072] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:04.624 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.624 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:04.624 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:04.624 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:04.624 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:04.624 08:53:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:04.624 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:04.624 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.624 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.624 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.624 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.624 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.624 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.624 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.624 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.624 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.883 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.883 "name": "raid_bdev1", 00:17:04.883 "uuid": "ccae6fe4-a9da-470b-9ffb-b6bd0ffbf10a", 00:17:04.883 "strip_size_kb": 64, 00:17:04.883 "state": "online", 00:17:04.883 "raid_level": "raid5f", 00:17:04.883 "superblock": true, 00:17:04.883 "num_base_bdevs": 4, 00:17:04.883 "num_base_bdevs_discovered": 4, 00:17:04.883 "num_base_bdevs_operational": 4, 00:17:04.883 "base_bdevs_list": [ 00:17:04.883 { 00:17:04.883 "name": "spare", 00:17:04.883 "uuid": "479ba1ed-7b57-5c05-88de-9a14f92abba6", 00:17:04.883 "is_configured": true, 00:17:04.883 "data_offset": 2048, 00:17:04.883 "data_size": 63488 00:17:04.883 }, 00:17:04.883 { 00:17:04.883 "name": "BaseBdev2", 00:17:04.883 "uuid": 
"e6a692dd-8a19-5a78-9376-626f9467d6b9", 00:17:04.883 "is_configured": true, 00:17:04.883 "data_offset": 2048, 00:17:04.883 "data_size": 63488 00:17:04.883 }, 00:17:04.883 { 00:17:04.883 "name": "BaseBdev3", 00:17:04.883 "uuid": "d04fb02a-9355-53f6-a03c-f344064019c8", 00:17:04.883 "is_configured": true, 00:17:04.883 "data_offset": 2048, 00:17:04.883 "data_size": 63488 00:17:04.883 }, 00:17:04.883 { 00:17:04.883 "name": "BaseBdev4", 00:17:04.883 "uuid": "bbca4ed5-c55b-534a-b9c1-77b41cda503f", 00:17:04.883 "is_configured": true, 00:17:04.883 "data_offset": 2048, 00:17:04.883 "data_size": 63488 00:17:04.883 } 00:17:04.883 ] 00:17:04.883 }' 00:17:04.884 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.884 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.143 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:05.143 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:05.143 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:05.143 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:05.143 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.143 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.143 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.143 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.143 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.143 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.143 08:53:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.143 "name": "raid_bdev1", 00:17:05.143 "uuid": "ccae6fe4-a9da-470b-9ffb-b6bd0ffbf10a", 00:17:05.143 "strip_size_kb": 64, 00:17:05.143 "state": "online", 00:17:05.143 "raid_level": "raid5f", 00:17:05.143 "superblock": true, 00:17:05.143 "num_base_bdevs": 4, 00:17:05.143 "num_base_bdevs_discovered": 4, 00:17:05.143 "num_base_bdevs_operational": 4, 00:17:05.143 "base_bdevs_list": [ 00:17:05.143 { 00:17:05.143 "name": "spare", 00:17:05.143 "uuid": "479ba1ed-7b57-5c05-88de-9a14f92abba6", 00:17:05.143 "is_configured": true, 00:17:05.143 "data_offset": 2048, 00:17:05.143 "data_size": 63488 00:17:05.143 }, 00:17:05.143 { 00:17:05.143 "name": "BaseBdev2", 00:17:05.143 "uuid": "e6a692dd-8a19-5a78-9376-626f9467d6b9", 00:17:05.143 "is_configured": true, 00:17:05.143 "data_offset": 2048, 00:17:05.143 "data_size": 63488 00:17:05.143 }, 00:17:05.143 { 00:17:05.143 "name": "BaseBdev3", 00:17:05.143 "uuid": "d04fb02a-9355-53f6-a03c-f344064019c8", 00:17:05.143 "is_configured": true, 00:17:05.143 "data_offset": 2048, 00:17:05.143 "data_size": 63488 00:17:05.143 }, 00:17:05.143 { 00:17:05.143 "name": "BaseBdev4", 00:17:05.143 "uuid": "bbca4ed5-c55b-534a-b9c1-77b41cda503f", 00:17:05.143 "is_configured": true, 00:17:05.143 "data_offset": 2048, 00:17:05.143 "data_size": 63488 00:17:05.143 } 00:17:05.143 ] 00:17:05.143 }' 00:17:05.143 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.143 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:05.143 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:05.403 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:05.403 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.403 
08:53:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.403 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.403 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:05.403 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.403 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:05.403 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:05.403 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.403 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.403 [2024-10-05 08:53:41.710881] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:05.403 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.403 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:05.403 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:05.403 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:05.403 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:05.403 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:05.403 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:05.403 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.403 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
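`verify_raid_bdev_state` first narrows the `bdev_raid_get_bdevs all` array down to the named bdev with `jq 'select(...)'`, then checks individual fields against the expected values (here `online raid5f 64 3` after the spare is removed). A sketch of that selection (not part of the captured trace), run against an abridged sample with field values copied from the dump that follows:

```shell
# Sketch: select one raid bdev by name from the RPC array, then read the
# fields verify_raid_bdev_state asserts on. Values mirror the trace after
# bdev_raid_remove_base_bdev spare (3 of 4 members discovered/operational).
all_bdevs='[{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid5f",
  "strip_size_kb": 64,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3
}]'

tmp=$(echo "$all_bdevs" | jq -r '.[] | select(.name == "raid_bdev1")')
state=$(echo "$tmp" | jq -r '.state')
level=$(echo "$tmp" | jq -r '.raid_level')
discovered=$(echo "$tmp" | jq -r '.num_base_bdevs_discovered')

echo "state=$state level=$level discovered=$discovered"
```

The removed member shows up in `base_bdevs_list` as a placeholder entry (`"name": null`, all-zero uuid, `is_configured: false`), which is why the discovered/operational counts drop to 3 while `num_base_bdevs` stays 4.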
00:17:05.403 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.403 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.403 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.403 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.403 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.403 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.403 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.403 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.403 "name": "raid_bdev1", 00:17:05.403 "uuid": "ccae6fe4-a9da-470b-9ffb-b6bd0ffbf10a", 00:17:05.403 "strip_size_kb": 64, 00:17:05.403 "state": "online", 00:17:05.403 "raid_level": "raid5f", 00:17:05.403 "superblock": true, 00:17:05.403 "num_base_bdevs": 4, 00:17:05.403 "num_base_bdevs_discovered": 3, 00:17:05.403 "num_base_bdevs_operational": 3, 00:17:05.403 "base_bdevs_list": [ 00:17:05.403 { 00:17:05.403 "name": null, 00:17:05.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.403 "is_configured": false, 00:17:05.403 "data_offset": 0, 00:17:05.403 "data_size": 63488 00:17:05.403 }, 00:17:05.403 { 00:17:05.403 "name": "BaseBdev2", 00:17:05.403 "uuid": "e6a692dd-8a19-5a78-9376-626f9467d6b9", 00:17:05.403 "is_configured": true, 00:17:05.403 "data_offset": 2048, 00:17:05.403 "data_size": 63488 00:17:05.403 }, 00:17:05.403 { 00:17:05.403 "name": "BaseBdev3", 00:17:05.403 "uuid": "d04fb02a-9355-53f6-a03c-f344064019c8", 00:17:05.403 "is_configured": true, 00:17:05.403 "data_offset": 2048, 00:17:05.403 "data_size": 63488 00:17:05.403 }, 00:17:05.403 { 00:17:05.403 "name": "BaseBdev4", 
00:17:05.403 "uuid": "bbca4ed5-c55b-534a-b9c1-77b41cda503f", 00:17:05.403 "is_configured": true, 00:17:05.403 "data_offset": 2048, 00:17:05.403 "data_size": 63488 00:17:05.403 } 00:17:05.403 ] 00:17:05.403 }' 00:17:05.403 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.403 08:53:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.971 08:53:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:05.971 08:53:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.971 08:53:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.971 [2024-10-05 08:53:42.170107] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:05.971 [2024-10-05 08:53:42.170292] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:05.971 [2024-10-05 08:53:42.170314] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:05.971 [2024-10-05 08:53:42.170352] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:05.971 [2024-10-05 08:53:42.183920] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:17:05.971 08:53:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.971 08:53:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:05.971 [2024-10-05 08:53:42.192720] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:06.909 08:53:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:06.909 08:53:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.909 08:53:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:06.909 08:53:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:06.909 08:53:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.909 08:53:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.909 08:53:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.909 08:53:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.909 08:53:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.909 08:53:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.909 08:53:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.909 "name": "raid_bdev1", 00:17:06.909 "uuid": "ccae6fe4-a9da-470b-9ffb-b6bd0ffbf10a", 00:17:06.909 "strip_size_kb": 64, 00:17:06.909 "state": "online", 00:17:06.909 
"raid_level": "raid5f", 00:17:06.909 "superblock": true, 00:17:06.909 "num_base_bdevs": 4, 00:17:06.909 "num_base_bdevs_discovered": 4, 00:17:06.909 "num_base_bdevs_operational": 4, 00:17:06.909 "process": { 00:17:06.909 "type": "rebuild", 00:17:06.909 "target": "spare", 00:17:06.909 "progress": { 00:17:06.909 "blocks": 19200, 00:17:06.909 "percent": 10 00:17:06.909 } 00:17:06.909 }, 00:17:06.909 "base_bdevs_list": [ 00:17:06.909 { 00:17:06.909 "name": "spare", 00:17:06.909 "uuid": "479ba1ed-7b57-5c05-88de-9a14f92abba6", 00:17:06.909 "is_configured": true, 00:17:06.909 "data_offset": 2048, 00:17:06.909 "data_size": 63488 00:17:06.909 }, 00:17:06.909 { 00:17:06.909 "name": "BaseBdev2", 00:17:06.909 "uuid": "e6a692dd-8a19-5a78-9376-626f9467d6b9", 00:17:06.909 "is_configured": true, 00:17:06.909 "data_offset": 2048, 00:17:06.909 "data_size": 63488 00:17:06.909 }, 00:17:06.909 { 00:17:06.909 "name": "BaseBdev3", 00:17:06.909 "uuid": "d04fb02a-9355-53f6-a03c-f344064019c8", 00:17:06.909 "is_configured": true, 00:17:06.909 "data_offset": 2048, 00:17:06.909 "data_size": 63488 00:17:06.909 }, 00:17:06.909 { 00:17:06.909 "name": "BaseBdev4", 00:17:06.909 "uuid": "bbca4ed5-c55b-534a-b9c1-77b41cda503f", 00:17:06.909 "is_configured": true, 00:17:06.909 "data_offset": 2048, 00:17:06.909 "data_size": 63488 00:17:06.909 } 00:17:06.909 ] 00:17:06.909 }' 00:17:06.909 08:53:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.909 08:53:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:06.909 08:53:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.909 08:53:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:06.909 08:53:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:06.909 08:53:43 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.909 08:53:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.909 [2024-10-05 08:53:43.347317] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:07.169 [2024-10-05 08:53:43.398263] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:07.169 [2024-10-05 08:53:43.398375] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:07.169 [2024-10-05 08:53:43.398414] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:07.169 [2024-10-05 08:53:43.398438] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:07.169 08:53:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.169 08:53:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:07.169 08:53:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.169 08:53:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:07.169 08:53:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:07.169 08:53:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:07.169 08:53:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:07.169 08:53:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.169 08:53:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.169 08:53:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.169 08:53:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:17:07.169 08:53:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.169 08:53:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.169 08:53:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.169 08:53:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.169 08:53:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.169 08:53:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.169 "name": "raid_bdev1", 00:17:07.169 "uuid": "ccae6fe4-a9da-470b-9ffb-b6bd0ffbf10a", 00:17:07.169 "strip_size_kb": 64, 00:17:07.169 "state": "online", 00:17:07.169 "raid_level": "raid5f", 00:17:07.169 "superblock": true, 00:17:07.170 "num_base_bdevs": 4, 00:17:07.170 "num_base_bdevs_discovered": 3, 00:17:07.170 "num_base_bdevs_operational": 3, 00:17:07.170 "base_bdevs_list": [ 00:17:07.170 { 00:17:07.170 "name": null, 00:17:07.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.170 "is_configured": false, 00:17:07.170 "data_offset": 0, 00:17:07.170 "data_size": 63488 00:17:07.170 }, 00:17:07.170 { 00:17:07.170 "name": "BaseBdev2", 00:17:07.170 "uuid": "e6a692dd-8a19-5a78-9376-626f9467d6b9", 00:17:07.170 "is_configured": true, 00:17:07.170 "data_offset": 2048, 00:17:07.170 "data_size": 63488 00:17:07.170 }, 00:17:07.170 { 00:17:07.170 "name": "BaseBdev3", 00:17:07.170 "uuid": "d04fb02a-9355-53f6-a03c-f344064019c8", 00:17:07.170 "is_configured": true, 00:17:07.170 "data_offset": 2048, 00:17:07.170 "data_size": 63488 00:17:07.170 }, 00:17:07.170 { 00:17:07.170 "name": "BaseBdev4", 00:17:07.170 "uuid": "bbca4ed5-c55b-534a-b9c1-77b41cda503f", 00:17:07.170 "is_configured": true, 00:17:07.170 "data_offset": 2048, 00:17:07.170 "data_size": 63488 00:17:07.170 } 00:17:07.170 ] 00:17:07.170 
}' 00:17:07.170 08:53:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.170 08:53:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.739 08:53:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:07.739 08:53:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.739 08:53:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.739 [2024-10-05 08:53:43.912962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:07.739 [2024-10-05 08:53:43.913077] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.739 [2024-10-05 08:53:43.913119] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:17:07.739 [2024-10-05 08:53:43.913175] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.739 [2024-10-05 08:53:43.913658] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.739 [2024-10-05 08:53:43.913723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:07.739 [2024-10-05 08:53:43.913839] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:07.739 [2024-10-05 08:53:43.913883] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:07.740 [2024-10-05 08:53:43.913924] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:07.740 [2024-10-05 08:53:43.914009] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:07.740 [2024-10-05 08:53:43.927793] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:17:07.740 spare 00:17:07.740 08:53:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.740 08:53:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:07.740 [2024-10-05 08:53:43.936208] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:08.679 08:53:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:08.679 08:53:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.679 08:53:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:08.679 08:53:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:08.679 08:53:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.679 08:53:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.679 08:53:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.679 08:53:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.679 08:53:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.679 08:53:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.679 08:53:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.679 "name": "raid_bdev1", 00:17:08.679 "uuid": "ccae6fe4-a9da-470b-9ffb-b6bd0ffbf10a", 00:17:08.679 "strip_size_kb": 64, 00:17:08.679 "state": 
"online", 00:17:08.679 "raid_level": "raid5f", 00:17:08.679 "superblock": true, 00:17:08.679 "num_base_bdevs": 4, 00:17:08.679 "num_base_bdevs_discovered": 4, 00:17:08.679 "num_base_bdevs_operational": 4, 00:17:08.679 "process": { 00:17:08.679 "type": "rebuild", 00:17:08.679 "target": "spare", 00:17:08.679 "progress": { 00:17:08.679 "blocks": 19200, 00:17:08.679 "percent": 10 00:17:08.679 } 00:17:08.679 }, 00:17:08.679 "base_bdevs_list": [ 00:17:08.679 { 00:17:08.679 "name": "spare", 00:17:08.679 "uuid": "479ba1ed-7b57-5c05-88de-9a14f92abba6", 00:17:08.679 "is_configured": true, 00:17:08.679 "data_offset": 2048, 00:17:08.679 "data_size": 63488 00:17:08.679 }, 00:17:08.679 { 00:17:08.679 "name": "BaseBdev2", 00:17:08.679 "uuid": "e6a692dd-8a19-5a78-9376-626f9467d6b9", 00:17:08.679 "is_configured": true, 00:17:08.679 "data_offset": 2048, 00:17:08.679 "data_size": 63488 00:17:08.679 }, 00:17:08.679 { 00:17:08.679 "name": "BaseBdev3", 00:17:08.679 "uuid": "d04fb02a-9355-53f6-a03c-f344064019c8", 00:17:08.679 "is_configured": true, 00:17:08.679 "data_offset": 2048, 00:17:08.679 "data_size": 63488 00:17:08.679 }, 00:17:08.679 { 00:17:08.679 "name": "BaseBdev4", 00:17:08.679 "uuid": "bbca4ed5-c55b-534a-b9c1-77b41cda503f", 00:17:08.679 "is_configured": true, 00:17:08.679 "data_offset": 2048, 00:17:08.679 "data_size": 63488 00:17:08.679 } 00:17:08.679 ] 00:17:08.679 }' 00:17:08.679 08:53:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.679 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:08.679 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.679 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:08.679 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:08.679 08:53:45 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.679 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.679 [2024-10-05 08:53:45.095029] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:08.679 [2024-10-05 08:53:45.141913] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:08.679 [2024-10-05 08:53:45.142025] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.679 [2024-10-05 08:53:45.142046] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:08.679 [2024-10-05 08:53:45.142054] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:08.939 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.939 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:08.939 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.939 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.939 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:08.939 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:08.939 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:08.939 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.939 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.939 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.939 08:53:45 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.939 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.939 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.939 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.939 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.939 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.939 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.939 "name": "raid_bdev1", 00:17:08.939 "uuid": "ccae6fe4-a9da-470b-9ffb-b6bd0ffbf10a", 00:17:08.939 "strip_size_kb": 64, 00:17:08.939 "state": "online", 00:17:08.939 "raid_level": "raid5f", 00:17:08.939 "superblock": true, 00:17:08.939 "num_base_bdevs": 4, 00:17:08.939 "num_base_bdevs_discovered": 3, 00:17:08.939 "num_base_bdevs_operational": 3, 00:17:08.939 "base_bdevs_list": [ 00:17:08.939 { 00:17:08.939 "name": null, 00:17:08.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.939 "is_configured": false, 00:17:08.939 "data_offset": 0, 00:17:08.939 "data_size": 63488 00:17:08.939 }, 00:17:08.939 { 00:17:08.939 "name": "BaseBdev2", 00:17:08.939 "uuid": "e6a692dd-8a19-5a78-9376-626f9467d6b9", 00:17:08.939 "is_configured": true, 00:17:08.939 "data_offset": 2048, 00:17:08.939 "data_size": 63488 00:17:08.939 }, 00:17:08.939 { 00:17:08.939 "name": "BaseBdev3", 00:17:08.940 "uuid": "d04fb02a-9355-53f6-a03c-f344064019c8", 00:17:08.940 "is_configured": true, 00:17:08.940 "data_offset": 2048, 00:17:08.940 "data_size": 63488 00:17:08.940 }, 00:17:08.940 { 00:17:08.940 "name": "BaseBdev4", 00:17:08.940 "uuid": "bbca4ed5-c55b-534a-b9c1-77b41cda503f", 00:17:08.940 "is_configured": true, 00:17:08.940 "data_offset": 2048, 00:17:08.940 
"data_size": 63488 00:17:08.940 } 00:17:08.940 ] 00:17:08.940 }' 00:17:08.940 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.940 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.200 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:09.200 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.200 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:09.200 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:09.200 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.200 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.200 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.200 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.200 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.200 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.200 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.200 "name": "raid_bdev1", 00:17:09.200 "uuid": "ccae6fe4-a9da-470b-9ffb-b6bd0ffbf10a", 00:17:09.200 "strip_size_kb": 64, 00:17:09.200 "state": "online", 00:17:09.200 "raid_level": "raid5f", 00:17:09.200 "superblock": true, 00:17:09.200 "num_base_bdevs": 4, 00:17:09.200 "num_base_bdevs_discovered": 3, 00:17:09.200 "num_base_bdevs_operational": 3, 00:17:09.200 "base_bdevs_list": [ 00:17:09.200 { 00:17:09.200 "name": null, 00:17:09.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.200 
"is_configured": false, 00:17:09.200 "data_offset": 0, 00:17:09.200 "data_size": 63488 00:17:09.200 }, 00:17:09.200 { 00:17:09.200 "name": "BaseBdev2", 00:17:09.200 "uuid": "e6a692dd-8a19-5a78-9376-626f9467d6b9", 00:17:09.200 "is_configured": true, 00:17:09.200 "data_offset": 2048, 00:17:09.200 "data_size": 63488 00:17:09.200 }, 00:17:09.200 { 00:17:09.200 "name": "BaseBdev3", 00:17:09.200 "uuid": "d04fb02a-9355-53f6-a03c-f344064019c8", 00:17:09.200 "is_configured": true, 00:17:09.200 "data_offset": 2048, 00:17:09.200 "data_size": 63488 00:17:09.200 }, 00:17:09.200 { 00:17:09.200 "name": "BaseBdev4", 00:17:09.200 "uuid": "bbca4ed5-c55b-534a-b9c1-77b41cda503f", 00:17:09.200 "is_configured": true, 00:17:09.200 "data_offset": 2048, 00:17:09.200 "data_size": 63488 00:17:09.200 } 00:17:09.200 ] 00:17:09.200 }' 00:17:09.200 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.200 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:09.200 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.464 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:09.464 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:09.464 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.464 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.464 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.464 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:09.464 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.464 08:53:45 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.464 [2024-10-05 08:53:45.705224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:09.464 [2024-10-05 08:53:45.705278] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.464 [2024-10-05 08:53:45.705300] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:17:09.464 [2024-10-05 08:53:45.705311] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.464 [2024-10-05 08:53:45.705752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.464 [2024-10-05 08:53:45.705769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:09.464 [2024-10-05 08:53:45.705839] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:09.464 [2024-10-05 08:53:45.705851] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:09.464 [2024-10-05 08:53:45.705860] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:09.464 [2024-10-05 08:53:45.705871] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:09.464 BaseBdev1 00:17:09.464 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.464 08:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:10.434 08:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:10.434 08:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:10.434 08:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:10.434 08:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:10.434 08:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:10.434 08:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:10.434 08:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.434 08:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.434 08:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.434 08:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.434 08:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.434 08:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.434 08:53:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.434 08:53:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.434 08:53:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.434 08:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.434 "name": "raid_bdev1", 00:17:10.434 "uuid": "ccae6fe4-a9da-470b-9ffb-b6bd0ffbf10a", 00:17:10.434 "strip_size_kb": 64, 00:17:10.434 "state": "online", 00:17:10.434 "raid_level": "raid5f", 00:17:10.434 "superblock": true, 00:17:10.434 "num_base_bdevs": 4, 00:17:10.434 "num_base_bdevs_discovered": 3, 00:17:10.434 "num_base_bdevs_operational": 3, 00:17:10.434 "base_bdevs_list": [ 00:17:10.434 { 00:17:10.434 "name": null, 00:17:10.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.434 "is_configured": false, 00:17:10.434 
"data_offset": 0, 00:17:10.434 "data_size": 63488 00:17:10.434 }, 00:17:10.434 { 00:17:10.434 "name": "BaseBdev2", 00:17:10.434 "uuid": "e6a692dd-8a19-5a78-9376-626f9467d6b9", 00:17:10.434 "is_configured": true, 00:17:10.434 "data_offset": 2048, 00:17:10.434 "data_size": 63488 00:17:10.434 }, 00:17:10.434 { 00:17:10.434 "name": "BaseBdev3", 00:17:10.434 "uuid": "d04fb02a-9355-53f6-a03c-f344064019c8", 00:17:10.434 "is_configured": true, 00:17:10.434 "data_offset": 2048, 00:17:10.434 "data_size": 63488 00:17:10.434 }, 00:17:10.434 { 00:17:10.434 "name": "BaseBdev4", 00:17:10.434 "uuid": "bbca4ed5-c55b-534a-b9c1-77b41cda503f", 00:17:10.434 "is_configured": true, 00:17:10.434 "data_offset": 2048, 00:17:10.434 "data_size": 63488 00:17:10.434 } 00:17:10.434 ] 00:17:10.434 }' 00:17:10.434 08:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.434 08:53:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.694 08:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:10.694 08:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.694 08:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:10.694 08:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:10.694 08:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.694 08:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.694 08:53:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.694 08:53:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.694 08:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:17:10.694 08:53:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.694 08:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.694 "name": "raid_bdev1", 00:17:10.694 "uuid": "ccae6fe4-a9da-470b-9ffb-b6bd0ffbf10a", 00:17:10.694 "strip_size_kb": 64, 00:17:10.694 "state": "online", 00:17:10.694 "raid_level": "raid5f", 00:17:10.694 "superblock": true, 00:17:10.694 "num_base_bdevs": 4, 00:17:10.694 "num_base_bdevs_discovered": 3, 00:17:10.694 "num_base_bdevs_operational": 3, 00:17:10.694 "base_bdevs_list": [ 00:17:10.694 { 00:17:10.694 "name": null, 00:17:10.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.694 "is_configured": false, 00:17:10.694 "data_offset": 0, 00:17:10.694 "data_size": 63488 00:17:10.694 }, 00:17:10.694 { 00:17:10.694 "name": "BaseBdev2", 00:17:10.694 "uuid": "e6a692dd-8a19-5a78-9376-626f9467d6b9", 00:17:10.694 "is_configured": true, 00:17:10.694 "data_offset": 2048, 00:17:10.694 "data_size": 63488 00:17:10.694 }, 00:17:10.694 { 00:17:10.694 "name": "BaseBdev3", 00:17:10.694 "uuid": "d04fb02a-9355-53f6-a03c-f344064019c8", 00:17:10.694 "is_configured": true, 00:17:10.694 "data_offset": 2048, 00:17:10.694 "data_size": 63488 00:17:10.694 }, 00:17:10.694 { 00:17:10.694 "name": "BaseBdev4", 00:17:10.694 "uuid": "bbca4ed5-c55b-534a-b9c1-77b41cda503f", 00:17:10.694 "is_configured": true, 00:17:10.694 "data_offset": 2048, 00:17:10.694 "data_size": 63488 00:17:10.694 } 00:17:10.694 ] 00:17:10.694 }' 00:17:10.954 08:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.954 08:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:10.954 08:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.954 08:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:10.954 
08:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:10.954 08:53:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:17:10.954 08:53:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:10.954 08:53:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:10.954 08:53:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:10.954 08:53:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:10.954 08:53:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:10.954 08:53:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:10.954 08:53:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.954 08:53:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.954 [2024-10-05 08:53:47.254627] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:10.954 [2024-10-05 08:53:47.254833] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:10.954 [2024-10-05 08:53:47.254902] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:10.954 request: 00:17:10.954 { 00:17:10.954 "base_bdev": "BaseBdev1", 00:17:10.954 "raid_bdev": "raid_bdev1", 00:17:10.954 "method": "bdev_raid_add_base_bdev", 00:17:10.954 "req_id": 1 00:17:10.954 } 00:17:10.954 Got JSON-RPC error response 00:17:10.954 response: 00:17:10.954 { 00:17:10.954 "code": -22, 00:17:10.954 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:17:10.954 } 00:17:10.954 08:53:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:10.954 08:53:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:17:10.954 08:53:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:10.954 08:53:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:10.954 08:53:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:10.954 08:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:11.894 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:11.894 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:11.894 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:11.894 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:11.894 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:11.894 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:11.894 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.894 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.894 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.894 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.894 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.894 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.894 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.894 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.894 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.894 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.894 "name": "raid_bdev1", 00:17:11.894 "uuid": "ccae6fe4-a9da-470b-9ffb-b6bd0ffbf10a", 00:17:11.894 "strip_size_kb": 64, 00:17:11.894 "state": "online", 00:17:11.894 "raid_level": "raid5f", 00:17:11.894 "superblock": true, 00:17:11.894 "num_base_bdevs": 4, 00:17:11.894 "num_base_bdevs_discovered": 3, 00:17:11.894 "num_base_bdevs_operational": 3, 00:17:11.894 "base_bdevs_list": [ 00:17:11.894 { 00:17:11.894 "name": null, 00:17:11.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.894 "is_configured": false, 00:17:11.894 "data_offset": 0, 00:17:11.894 "data_size": 63488 00:17:11.894 }, 00:17:11.894 { 00:17:11.894 "name": "BaseBdev2", 00:17:11.894 "uuid": "e6a692dd-8a19-5a78-9376-626f9467d6b9", 00:17:11.894 "is_configured": true, 00:17:11.894 "data_offset": 2048, 00:17:11.894 "data_size": 63488 00:17:11.894 }, 00:17:11.894 { 00:17:11.894 "name": "BaseBdev3", 00:17:11.894 "uuid": "d04fb02a-9355-53f6-a03c-f344064019c8", 00:17:11.894 "is_configured": true, 00:17:11.894 "data_offset": 2048, 00:17:11.894 "data_size": 63488 00:17:11.894 }, 00:17:11.894 { 00:17:11.894 "name": "BaseBdev4", 00:17:11.894 "uuid": "bbca4ed5-c55b-534a-b9c1-77b41cda503f", 00:17:11.894 "is_configured": true, 00:17:11.894 "data_offset": 2048, 00:17:11.894 "data_size": 63488 00:17:11.894 } 00:17:11.894 ] 00:17:11.894 }' 00:17:11.894 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.894 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:17:12.462 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:12.462 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.462 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:12.462 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:12.462 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.462 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.462 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.462 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.462 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.462 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.462 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.462 "name": "raid_bdev1", 00:17:12.462 "uuid": "ccae6fe4-a9da-470b-9ffb-b6bd0ffbf10a", 00:17:12.462 "strip_size_kb": 64, 00:17:12.462 "state": "online", 00:17:12.462 "raid_level": "raid5f", 00:17:12.462 "superblock": true, 00:17:12.462 "num_base_bdevs": 4, 00:17:12.462 "num_base_bdevs_discovered": 3, 00:17:12.462 "num_base_bdevs_operational": 3, 00:17:12.462 "base_bdevs_list": [ 00:17:12.462 { 00:17:12.462 "name": null, 00:17:12.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.462 "is_configured": false, 00:17:12.462 "data_offset": 0, 00:17:12.462 "data_size": 63488 00:17:12.462 }, 00:17:12.462 { 00:17:12.462 "name": "BaseBdev2", 00:17:12.462 "uuid": "e6a692dd-8a19-5a78-9376-626f9467d6b9", 00:17:12.462 "is_configured": true, 
00:17:12.462 "data_offset": 2048, 00:17:12.462 "data_size": 63488 00:17:12.462 }, 00:17:12.463 { 00:17:12.463 "name": "BaseBdev3", 00:17:12.463 "uuid": "d04fb02a-9355-53f6-a03c-f344064019c8", 00:17:12.463 "is_configured": true, 00:17:12.463 "data_offset": 2048, 00:17:12.463 "data_size": 63488 00:17:12.463 }, 00:17:12.463 { 00:17:12.463 "name": "BaseBdev4", 00:17:12.463 "uuid": "bbca4ed5-c55b-534a-b9c1-77b41cda503f", 00:17:12.463 "is_configured": true, 00:17:12.463 "data_offset": 2048, 00:17:12.463 "data_size": 63488 00:17:12.463 } 00:17:12.463 ] 00:17:12.463 }' 00:17:12.463 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.463 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:12.463 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.463 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:12.463 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 81667 00:17:12.463 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 81667 ']' 00:17:12.463 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 81667 00:17:12.463 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:17:12.463 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:12.463 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81667 00:17:12.463 killing process with pid 81667 00:17:12.463 Received shutdown signal, test time was about 60.000000 seconds 00:17:12.463 00:17:12.463 Latency(us) 00:17:12.463 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.463 
=================================================================================================================== 00:17:12.463 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:12.463 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:12.463 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:12.463 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81667' 00:17:12.463 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 81667 00:17:12.463 [2024-10-05 08:53:48.863761] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:12.463 [2024-10-05 08:53:48.863885] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:12.463 08:53:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 81667 00:17:12.463 [2024-10-05 08:53:48.863969] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:12.463 [2024-10-05 08:53:48.863982] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:13.032 [2024-10-05 08:53:49.323324] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:14.414 08:53:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:14.414 ************************************ 00:17:14.414 END TEST raid5f_rebuild_test_sb 00:17:14.414 ************************************ 00:17:14.414 00:17:14.414 real 0m26.958s 00:17:14.414 user 0m33.687s 00:17:14.414 sys 0m3.232s 00:17:14.414 08:53:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:14.414 08:53:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.414 08:53:50 bdev_raid -- bdev/bdev_raid.sh@995 -- # 
base_blocklen=4096 00:17:14.414 08:53:50 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:17:14.414 08:53:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:14.414 08:53:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:14.414 08:53:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:14.414 ************************************ 00:17:14.414 START TEST raid_state_function_test_sb_4k 00:17:14.414 ************************************ 00:17:14.414 08:53:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:17:14.414 08:53:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:14.414 08:53:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:14.414 08:53:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:14.414 08:53:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:14.414 08:53:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:14.414 08:53:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:14.414 08:53:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:14.414 08:53:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:14.414 08:53:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:14.414 08:53:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:14.414 08:53:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:14.414 08:53:50 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:14.414 08:53:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:14.414 08:53:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:14.414 08:53:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:14.414 08:53:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:14.414 08:53:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:14.414 08:53:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:14.414 08:53:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:14.414 08:53:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:14.414 08:53:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:14.414 08:53:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:14.414 08:53:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=82324 00:17:14.414 08:53:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:14.414 08:53:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82324' 00:17:14.414 Process raid pid: 82324 00:17:14.414 08:53:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 82324 00:17:14.414 08:53:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 82324 ']' 00:17:14.414 08:53:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:17:14.414 08:53:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:14.414 08:53:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.415 08:53:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:14.415 08:53:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.415 [2024-10-05 08:53:50.678106] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:17:14.415 [2024-10-05 08:53:50.678224] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:14.415 [2024-10-05 08:53:50.843181] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.675 [2024-10-05 08:53:51.038802] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.934 [2024-10-05 08:53:51.229999] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:14.934 [2024-10-05 08:53:51.230030] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:15.195 08:53:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:15.195 08:53:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:17:15.195 08:53:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:15.195 08:53:51 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.195 08:53:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.195 [2024-10-05 08:53:51.489296] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:15.195 [2024-10-05 08:53:51.489349] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:15.195 [2024-10-05 08:53:51.489359] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:15.195 [2024-10-05 08:53:51.489368] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:15.195 08:53:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.195 08:53:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:15.195 08:53:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:15.195 08:53:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:15.195 08:53:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:15.195 08:53:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:15.195 08:53:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:15.195 08:53:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.195 08:53:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.195 08:53:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.195 08:53:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- 
# local tmp 00:17:15.195 08:53:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.195 08:53:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:15.195 08:53:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.195 08:53:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.195 08:53:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.195 08:53:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.195 "name": "Existed_Raid", 00:17:15.195 "uuid": "c96c9476-58ff-40eb-a38e-5415fd021f6d", 00:17:15.195 "strip_size_kb": 0, 00:17:15.195 "state": "configuring", 00:17:15.195 "raid_level": "raid1", 00:17:15.195 "superblock": true, 00:17:15.195 "num_base_bdevs": 2, 00:17:15.195 "num_base_bdevs_discovered": 0, 00:17:15.195 "num_base_bdevs_operational": 2, 00:17:15.195 "base_bdevs_list": [ 00:17:15.195 { 00:17:15.195 "name": "BaseBdev1", 00:17:15.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.195 "is_configured": false, 00:17:15.195 "data_offset": 0, 00:17:15.195 "data_size": 0 00:17:15.195 }, 00:17:15.195 { 00:17:15.195 "name": "BaseBdev2", 00:17:15.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.195 "is_configured": false, 00:17:15.195 "data_offset": 0, 00:17:15.195 "data_size": 0 00:17:15.195 } 00:17:15.195 ] 00:17:15.195 }' 00:17:15.195 08:53:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.195 08:53:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.765 08:53:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:15.765 08:53:51 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.765 08:53:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.765 [2024-10-05 08:53:51.960342] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:15.765 [2024-10-05 08:53:51.960421] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:15.765 08:53:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.765 08:53:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:15.765 08:53:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.765 08:53:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.765 [2024-10-05 08:53:51.972346] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:15.765 [2024-10-05 08:53:51.972419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:15.765 [2024-10-05 08:53:51.972444] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:15.765 [2024-10-05 08:53:51.972467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:15.765 08:53:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.765 08:53:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:17:15.765 08:53:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.765 08:53:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:17:15.765 [2024-10-05 08:53:52.048162] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:15.765 BaseBdev1 00:17:15.765 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.765 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:15.765 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:15.765 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:15.765 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:17:15.765 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:15.765 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:15.765 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:15.766 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.766 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.766 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.766 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:15.766 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.766 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.766 [ 00:17:15.766 { 00:17:15.766 "name": "BaseBdev1", 00:17:15.766 "aliases": [ 00:17:15.766 "fe82a0eb-b27e-4d33-98ae-9a84788ba4db" 00:17:15.766 ], 00:17:15.766 "product_name": "Malloc 
disk", 00:17:15.766 "block_size": 4096, 00:17:15.766 "num_blocks": 8192, 00:17:15.766 "uuid": "fe82a0eb-b27e-4d33-98ae-9a84788ba4db", 00:17:15.766 "assigned_rate_limits": { 00:17:15.766 "rw_ios_per_sec": 0, 00:17:15.766 "rw_mbytes_per_sec": 0, 00:17:15.766 "r_mbytes_per_sec": 0, 00:17:15.766 "w_mbytes_per_sec": 0 00:17:15.766 }, 00:17:15.766 "claimed": true, 00:17:15.766 "claim_type": "exclusive_write", 00:17:15.766 "zoned": false, 00:17:15.766 "supported_io_types": { 00:17:15.766 "read": true, 00:17:15.766 "write": true, 00:17:15.766 "unmap": true, 00:17:15.766 "flush": true, 00:17:15.766 "reset": true, 00:17:15.766 "nvme_admin": false, 00:17:15.766 "nvme_io": false, 00:17:15.766 "nvme_io_md": false, 00:17:15.766 "write_zeroes": true, 00:17:15.766 "zcopy": true, 00:17:15.766 "get_zone_info": false, 00:17:15.766 "zone_management": false, 00:17:15.766 "zone_append": false, 00:17:15.766 "compare": false, 00:17:15.766 "compare_and_write": false, 00:17:15.766 "abort": true, 00:17:15.766 "seek_hole": false, 00:17:15.766 "seek_data": false, 00:17:15.766 "copy": true, 00:17:15.766 "nvme_iov_md": false 00:17:15.766 }, 00:17:15.766 "memory_domains": [ 00:17:15.766 { 00:17:15.766 "dma_device_id": "system", 00:17:15.766 "dma_device_type": 1 00:17:15.766 }, 00:17:15.766 { 00:17:15.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.766 "dma_device_type": 2 00:17:15.766 } 00:17:15.766 ], 00:17:15.766 "driver_specific": {} 00:17:15.766 } 00:17:15.766 ] 00:17:15.766 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.766 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:17:15.766 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:15.766 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:15.766 08:53:52 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:15.766 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:15.766 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:15.766 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:15.766 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.766 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.766 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.766 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.766 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.766 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:15.766 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.766 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.766 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.766 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.766 "name": "Existed_Raid", 00:17:15.766 "uuid": "d55180e2-1462-4baf-bd15-6ab869f97653", 00:17:15.766 "strip_size_kb": 0, 00:17:15.766 "state": "configuring", 00:17:15.766 "raid_level": "raid1", 00:17:15.766 "superblock": true, 00:17:15.766 "num_base_bdevs": 2, 00:17:15.766 "num_base_bdevs_discovered": 1, 00:17:15.766 "num_base_bdevs_operational": 2, 
00:17:15.766 "base_bdevs_list": [ 00:17:15.766 { 00:17:15.766 "name": "BaseBdev1", 00:17:15.766 "uuid": "fe82a0eb-b27e-4d33-98ae-9a84788ba4db", 00:17:15.766 "is_configured": true, 00:17:15.766 "data_offset": 256, 00:17:15.766 "data_size": 7936 00:17:15.766 }, 00:17:15.766 { 00:17:15.766 "name": "BaseBdev2", 00:17:15.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.766 "is_configured": false, 00:17:15.766 "data_offset": 0, 00:17:15.766 "data_size": 0 00:17:15.766 } 00:17:15.766 ] 00:17:15.766 }' 00:17:15.766 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.766 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.336 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:16.336 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.336 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.336 [2024-10-05 08:53:52.571280] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:16.336 [2024-10-05 08:53:52.571325] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:16.336 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.336 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:16.336 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.336 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.336 [2024-10-05 08:53:52.583296] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:17:16.336 [2024-10-05 08:53:52.585131] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:16.336 [2024-10-05 08:53:52.585197] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:16.336 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.336 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:16.336 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:16.336 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:16.336 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:16.336 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:16.336 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:16.336 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:16.336 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:16.336 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.336 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.336 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.336 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.336 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.336 08:53:52 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:16.336 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.336 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.336 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.336 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.336 "name": "Existed_Raid", 00:17:16.336 "uuid": "b22d6172-8b64-4f42-a433-591e024537d1", 00:17:16.336 "strip_size_kb": 0, 00:17:16.336 "state": "configuring", 00:17:16.336 "raid_level": "raid1", 00:17:16.336 "superblock": true, 00:17:16.336 "num_base_bdevs": 2, 00:17:16.336 "num_base_bdevs_discovered": 1, 00:17:16.336 "num_base_bdevs_operational": 2, 00:17:16.336 "base_bdevs_list": [ 00:17:16.336 { 00:17:16.336 "name": "BaseBdev1", 00:17:16.336 "uuid": "fe82a0eb-b27e-4d33-98ae-9a84788ba4db", 00:17:16.336 "is_configured": true, 00:17:16.336 "data_offset": 256, 00:17:16.336 "data_size": 7936 00:17:16.336 }, 00:17:16.336 { 00:17:16.336 "name": "BaseBdev2", 00:17:16.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.336 "is_configured": false, 00:17:16.336 "data_offset": 0, 00:17:16.336 "data_size": 0 00:17:16.336 } 00:17:16.336 ] 00:17:16.336 }' 00:17:16.336 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.336 08:53:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.596 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:17:16.596 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.596 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:16.856 [2024-10-05 08:53:53.077695] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:16.856 [2024-10-05 08:53:53.078037] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:16.856 [2024-10-05 08:53:53.078095] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:16.856 [2024-10-05 08:53:53.078373] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:16.856 [2024-10-05 08:53:53.078571] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:16.856 BaseBdev2 00:17:16.856 [2024-10-05 08:53:53.078617] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:16.856 [2024-10-05 08:53:53.078802] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:16.856 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.856 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:16.856 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:16.856 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:16.856 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:17:16.856 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:16.856 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:16.856 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:16.856 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.856 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.856 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.856 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:16.856 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.856 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.856 [ 00:17:16.856 { 00:17:16.856 "name": "BaseBdev2", 00:17:16.856 "aliases": [ 00:17:16.856 "5fc69c22-7bbf-4913-b2fb-003851b34b3d" 00:17:16.856 ], 00:17:16.856 "product_name": "Malloc disk", 00:17:16.856 "block_size": 4096, 00:17:16.856 "num_blocks": 8192, 00:17:16.856 "uuid": "5fc69c22-7bbf-4913-b2fb-003851b34b3d", 00:17:16.856 "assigned_rate_limits": { 00:17:16.856 "rw_ios_per_sec": 0, 00:17:16.856 "rw_mbytes_per_sec": 0, 00:17:16.856 "r_mbytes_per_sec": 0, 00:17:16.856 "w_mbytes_per_sec": 0 00:17:16.856 }, 00:17:16.856 "claimed": true, 00:17:16.856 "claim_type": "exclusive_write", 00:17:16.856 "zoned": false, 00:17:16.856 "supported_io_types": { 00:17:16.856 "read": true, 00:17:16.856 "write": true, 00:17:16.856 "unmap": true, 00:17:16.856 "flush": true, 00:17:16.856 "reset": true, 00:17:16.856 "nvme_admin": false, 00:17:16.856 "nvme_io": false, 00:17:16.856 "nvme_io_md": false, 00:17:16.856 "write_zeroes": true, 00:17:16.856 "zcopy": true, 00:17:16.856 "get_zone_info": false, 00:17:16.856 "zone_management": false, 00:17:16.856 "zone_append": false, 00:17:16.856 "compare": false, 00:17:16.856 "compare_and_write": false, 00:17:16.856 "abort": true, 00:17:16.856 "seek_hole": false, 00:17:16.856 "seek_data": false, 00:17:16.856 "copy": true, 00:17:16.856 "nvme_iov_md": false 00:17:16.856 }, 00:17:16.856 "memory_domains": [ 
00:17:16.856 { 00:17:16.856 "dma_device_id": "system", 00:17:16.856 "dma_device_type": 1 00:17:16.856 }, 00:17:16.856 { 00:17:16.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.856 "dma_device_type": 2 00:17:16.856 } 00:17:16.856 ], 00:17:16.856 "driver_specific": {} 00:17:16.856 } 00:17:16.856 ] 00:17:16.856 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.856 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:17:16.856 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:16.856 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:16.856 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:16.856 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:16.856 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:16.856 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:16.856 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:16.856 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:16.856 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.856 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.856 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.856 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.856 08:53:53 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.856 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:16.856 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.856 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.856 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.856 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.856 "name": "Existed_Raid", 00:17:16.856 "uuid": "b22d6172-8b64-4f42-a433-591e024537d1", 00:17:16.856 "strip_size_kb": 0, 00:17:16.856 "state": "online", 00:17:16.856 "raid_level": "raid1", 00:17:16.856 "superblock": true, 00:17:16.856 "num_base_bdevs": 2, 00:17:16.856 "num_base_bdevs_discovered": 2, 00:17:16.856 "num_base_bdevs_operational": 2, 00:17:16.856 "base_bdevs_list": [ 00:17:16.856 { 00:17:16.856 "name": "BaseBdev1", 00:17:16.856 "uuid": "fe82a0eb-b27e-4d33-98ae-9a84788ba4db", 00:17:16.856 "is_configured": true, 00:17:16.856 "data_offset": 256, 00:17:16.856 "data_size": 7936 00:17:16.856 }, 00:17:16.856 { 00:17:16.856 "name": "BaseBdev2", 00:17:16.856 "uuid": "5fc69c22-7bbf-4913-b2fb-003851b34b3d", 00:17:16.856 "is_configured": true, 00:17:16.856 "data_offset": 256, 00:17:16.856 "data_size": 7936 00:17:16.856 } 00:17:16.856 ] 00:17:16.856 }' 00:17:16.856 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.856 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.117 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:17.117 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 
-- # local raid_bdev_name=Existed_Raid 00:17:17.117 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:17.117 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:17.117 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:17.117 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:17.117 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:17.117 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:17.117 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.117 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.117 [2024-10-05 08:53:53.585226] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:17.378 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.378 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:17.378 "name": "Existed_Raid", 00:17:17.378 "aliases": [ 00:17:17.378 "b22d6172-8b64-4f42-a433-591e024537d1" 00:17:17.378 ], 00:17:17.378 "product_name": "Raid Volume", 00:17:17.378 "block_size": 4096, 00:17:17.378 "num_blocks": 7936, 00:17:17.378 "uuid": "b22d6172-8b64-4f42-a433-591e024537d1", 00:17:17.378 "assigned_rate_limits": { 00:17:17.378 "rw_ios_per_sec": 0, 00:17:17.378 "rw_mbytes_per_sec": 0, 00:17:17.378 "r_mbytes_per_sec": 0, 00:17:17.378 "w_mbytes_per_sec": 0 00:17:17.378 }, 00:17:17.378 "claimed": false, 00:17:17.378 "zoned": false, 00:17:17.378 "supported_io_types": { 00:17:17.378 "read": true, 00:17:17.378 "write": true, 00:17:17.378 "unmap": false, 00:17:17.378 
"flush": false, 00:17:17.378 "reset": true, 00:17:17.378 "nvme_admin": false, 00:17:17.378 "nvme_io": false, 00:17:17.378 "nvme_io_md": false, 00:17:17.378 "write_zeroes": true, 00:17:17.378 "zcopy": false, 00:17:17.378 "get_zone_info": false, 00:17:17.378 "zone_management": false, 00:17:17.378 "zone_append": false, 00:17:17.378 "compare": false, 00:17:17.378 "compare_and_write": false, 00:17:17.378 "abort": false, 00:17:17.378 "seek_hole": false, 00:17:17.378 "seek_data": false, 00:17:17.378 "copy": false, 00:17:17.378 "nvme_iov_md": false 00:17:17.378 }, 00:17:17.378 "memory_domains": [ 00:17:17.378 { 00:17:17.378 "dma_device_id": "system", 00:17:17.378 "dma_device_type": 1 00:17:17.378 }, 00:17:17.378 { 00:17:17.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.378 "dma_device_type": 2 00:17:17.378 }, 00:17:17.378 { 00:17:17.378 "dma_device_id": "system", 00:17:17.378 "dma_device_type": 1 00:17:17.378 }, 00:17:17.378 { 00:17:17.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.378 "dma_device_type": 2 00:17:17.378 } 00:17:17.378 ], 00:17:17.378 "driver_specific": { 00:17:17.378 "raid": { 00:17:17.378 "uuid": "b22d6172-8b64-4f42-a433-591e024537d1", 00:17:17.378 "strip_size_kb": 0, 00:17:17.378 "state": "online", 00:17:17.378 "raid_level": "raid1", 00:17:17.378 "superblock": true, 00:17:17.378 "num_base_bdevs": 2, 00:17:17.378 "num_base_bdevs_discovered": 2, 00:17:17.378 "num_base_bdevs_operational": 2, 00:17:17.378 "base_bdevs_list": [ 00:17:17.378 { 00:17:17.378 "name": "BaseBdev1", 00:17:17.378 "uuid": "fe82a0eb-b27e-4d33-98ae-9a84788ba4db", 00:17:17.378 "is_configured": true, 00:17:17.378 "data_offset": 256, 00:17:17.378 "data_size": 7936 00:17:17.378 }, 00:17:17.378 { 00:17:17.378 "name": "BaseBdev2", 00:17:17.378 "uuid": "5fc69c22-7bbf-4913-b2fb-003851b34b3d", 00:17:17.378 "is_configured": true, 00:17:17.378 "data_offset": 256, 00:17:17.378 "data_size": 7936 00:17:17.378 } 00:17:17.378 ] 00:17:17.378 } 00:17:17.378 } 00:17:17.378 }' 00:17:17.378 
08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:17.378 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:17.378 BaseBdev2' 00:17:17.378 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:17.378 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:17.378 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:17.378 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:17.378 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:17.378 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.378 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.378 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.378 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:17.378 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:17.379 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:17.379 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:17.379 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.379 08:53:53 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:17.379 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.379 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.379 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:17.379 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:17.379 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:17.379 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.379 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.379 [2024-10-05 08:53:53.792606] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:17.639 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.639 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:17.639 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:17.639 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:17.639 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:17.639 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:17.639 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:17.639 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:17:17.639 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:17.639 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:17.639 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:17.639 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:17.639 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.639 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.639 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.639 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.639 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.639 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:17.639 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.639 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.639 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.639 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.639 "name": "Existed_Raid", 00:17:17.639 "uuid": "b22d6172-8b64-4f42-a433-591e024537d1", 00:17:17.639 "strip_size_kb": 0, 00:17:17.639 "state": "online", 00:17:17.639 "raid_level": "raid1", 00:17:17.639 "superblock": true, 00:17:17.639 "num_base_bdevs": 2, 00:17:17.639 "num_base_bdevs_discovered": 1, 00:17:17.639 
"num_base_bdevs_operational": 1, 00:17:17.639 "base_bdevs_list": [ 00:17:17.639 { 00:17:17.639 "name": null, 00:17:17.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.639 "is_configured": false, 00:17:17.639 "data_offset": 0, 00:17:17.639 "data_size": 7936 00:17:17.639 }, 00:17:17.639 { 00:17:17.639 "name": "BaseBdev2", 00:17:17.639 "uuid": "5fc69c22-7bbf-4913-b2fb-003851b34b3d", 00:17:17.639 "is_configured": true, 00:17:17.639 "data_offset": 256, 00:17:17.639 "data_size": 7936 00:17:17.639 } 00:17:17.639 ] 00:17:17.639 }' 00:17:17.639 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.639 08:53:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.899 08:53:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:17.899 08:53:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:17.899 08:53:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.899 08:53:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.899 08:53:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.899 08:53:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:17.899 08:53:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.159 08:53:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:18.159 08:53:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:18.159 08:53:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:18.159 08:53:54 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.159 08:53:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.159 [2024-10-05 08:53:54.397010] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:18.159 [2024-10-05 08:53:54.397109] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:18.159 [2024-10-05 08:53:54.485997] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:18.159 [2024-10-05 08:53:54.486046] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:18.159 [2024-10-05 08:53:54.486058] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:18.159 08:53:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.159 08:53:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:18.159 08:53:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:18.159 08:53:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.159 08:53:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.159 08:53:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.159 08:53:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:18.159 08:53:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.159 08:53:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:18.159 08:53:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:18.159 08:53:54 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:18.159 08:53:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 82324 00:17:18.159 08:53:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 82324 ']' 00:17:18.159 08:53:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 82324 00:17:18.159 08:53:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:17:18.159 08:53:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:18.159 08:53:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82324 00:17:18.159 08:53:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:18.159 killing process with pid 82324 00:17:18.159 08:53:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:18.159 08:53:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82324' 00:17:18.159 08:53:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 82324 00:17:18.159 [2024-10-05 08:53:54.582787] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:18.159 08:53:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 82324 00:17:18.159 [2024-10-05 08:53:54.599646] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:19.540 ************************************ 00:17:19.540 END TEST raid_state_function_test_sb_4k 00:17:19.540 08:53:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:17:19.540 00:17:19.540 real 0m5.203s 00:17:19.540 user 0m7.442s 00:17:19.540 sys 0m0.922s 00:17:19.540 08:53:55 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:19.540 08:53:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.540 ************************************ 00:17:19.540 08:53:55 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:17:19.540 08:53:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:19.540 08:53:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:19.540 08:53:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:19.540 ************************************ 00:17:19.540 START TEST raid_superblock_test_4k 00:17:19.540 ************************************ 00:17:19.540 08:53:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:17:19.540 08:53:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:19.540 08:53:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:19.540 08:53:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:19.540 08:53:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:19.540 08:53:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:19.540 08:53:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:19.540 08:53:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:19.540 08:53:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:19.540 08:53:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:19.541 08:53:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:19.541 
08:53:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:19.541 08:53:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:19.541 08:53:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:19.541 08:53:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:19.541 08:53:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:19.541 08:53:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=82546 00:17:19.541 08:53:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:19.541 08:53:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 82546 00:17:19.541 08:53:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 82546 ']' 00:17:19.541 08:53:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.541 08:53:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:19.541 08:53:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:19.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:19.541 08:53:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:19.541 08:53:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.541 [2024-10-05 08:53:55.947140] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:17:19.541 [2024-10-05 08:53:55.947360] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82546 ] 00:17:19.800 [2024-10-05 08:53:56.109827] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.060 [2024-10-05 08:53:56.307479] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.060 [2024-10-05 08:53:56.495103] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:20.060 [2024-10-05 08:53:56.495235] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:20.320 08:53:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:20.320 08:53:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:17:20.320 08:53:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:20.320 08:53:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:20.320 08:53:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:20.320 08:53:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:20.320 08:53:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:20.320 08:53:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:20.320 08:53:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:20.320 08:53:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:20.320 08:53:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:17:20.320 08:53:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.320 08:53:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.580 malloc1 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.580 [2024-10-05 08:53:56.801589] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:20.580 [2024-10-05 08:53:56.801697] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.580 [2024-10-05 08:53:56.801749] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:20.580 [2024-10-05 08:53:56.801783] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.580 [2024-10-05 08:53:56.803956] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.580 [2024-10-05 08:53:56.804034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:20.580 pt1 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.580 malloc2 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.580 [2024-10-05 08:53:56.864931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:20.580 [2024-10-05 08:53:56.864994] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.580 [2024-10-05 08:53:56.865030] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:20.580 [2024-10-05 08:53:56.865039] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.580 [2024-10-05 08:53:56.867065] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.580 [2024-10-05 
08:53:56.867108] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:20.580 pt2 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.580 [2024-10-05 08:53:56.876994] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:20.580 [2024-10-05 08:53:56.878763] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:20.580 [2024-10-05 08:53:56.878933] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:20.580 [2024-10-05 08:53:56.878946] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:20.580 [2024-10-05 08:53:56.879175] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:20.580 [2024-10-05 08:53:56.879333] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:20.580 [2024-10-05 08:53:56.879346] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:20.580 [2024-10-05 08:53:56.879479] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.580 08:53:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.580 "name": "raid_bdev1", 00:17:20.581 "uuid": "49944ea9-930a-4df9-8bb2-5ed985965b64", 00:17:20.581 "strip_size_kb": 0, 00:17:20.581 "state": "online", 00:17:20.581 "raid_level": "raid1", 00:17:20.581 "superblock": true, 00:17:20.581 "num_base_bdevs": 2, 00:17:20.581 
"num_base_bdevs_discovered": 2, 00:17:20.581 "num_base_bdevs_operational": 2, 00:17:20.581 "base_bdevs_list": [ 00:17:20.581 { 00:17:20.581 "name": "pt1", 00:17:20.581 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:20.581 "is_configured": true, 00:17:20.581 "data_offset": 256, 00:17:20.581 "data_size": 7936 00:17:20.581 }, 00:17:20.581 { 00:17:20.581 "name": "pt2", 00:17:20.581 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:20.581 "is_configured": true, 00:17:20.581 "data_offset": 256, 00:17:20.581 "data_size": 7936 00:17:20.581 } 00:17:20.581 ] 00:17:20.581 }' 00:17:20.581 08:53:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.581 08:53:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.840 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:20.840 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:20.840 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:20.840 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:20.840 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:20.840 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:20.840 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:20.840 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:20.840 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.840 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.100 [2024-10-05 08:53:57.316390] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:17:21.100 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.100 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:21.100 "name": "raid_bdev1", 00:17:21.100 "aliases": [ 00:17:21.100 "49944ea9-930a-4df9-8bb2-5ed985965b64" 00:17:21.100 ], 00:17:21.100 "product_name": "Raid Volume", 00:17:21.100 "block_size": 4096, 00:17:21.100 "num_blocks": 7936, 00:17:21.100 "uuid": "49944ea9-930a-4df9-8bb2-5ed985965b64", 00:17:21.100 "assigned_rate_limits": { 00:17:21.100 "rw_ios_per_sec": 0, 00:17:21.100 "rw_mbytes_per_sec": 0, 00:17:21.100 "r_mbytes_per_sec": 0, 00:17:21.100 "w_mbytes_per_sec": 0 00:17:21.100 }, 00:17:21.100 "claimed": false, 00:17:21.100 "zoned": false, 00:17:21.100 "supported_io_types": { 00:17:21.100 "read": true, 00:17:21.100 "write": true, 00:17:21.100 "unmap": false, 00:17:21.100 "flush": false, 00:17:21.100 "reset": true, 00:17:21.100 "nvme_admin": false, 00:17:21.100 "nvme_io": false, 00:17:21.100 "nvme_io_md": false, 00:17:21.100 "write_zeroes": true, 00:17:21.100 "zcopy": false, 00:17:21.100 "get_zone_info": false, 00:17:21.100 "zone_management": false, 00:17:21.100 "zone_append": false, 00:17:21.100 "compare": false, 00:17:21.100 "compare_and_write": false, 00:17:21.100 "abort": false, 00:17:21.100 "seek_hole": false, 00:17:21.100 "seek_data": false, 00:17:21.100 "copy": false, 00:17:21.101 "nvme_iov_md": false 00:17:21.101 }, 00:17:21.101 "memory_domains": [ 00:17:21.101 { 00:17:21.101 "dma_device_id": "system", 00:17:21.101 "dma_device_type": 1 00:17:21.101 }, 00:17:21.101 { 00:17:21.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.101 "dma_device_type": 2 00:17:21.101 }, 00:17:21.101 { 00:17:21.101 "dma_device_id": "system", 00:17:21.101 "dma_device_type": 1 00:17:21.101 }, 00:17:21.101 { 00:17:21.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.101 "dma_device_type": 2 00:17:21.101 } 00:17:21.101 ], 
00:17:21.101 "driver_specific": { 00:17:21.101 "raid": { 00:17:21.101 "uuid": "49944ea9-930a-4df9-8bb2-5ed985965b64", 00:17:21.101 "strip_size_kb": 0, 00:17:21.101 "state": "online", 00:17:21.101 "raid_level": "raid1", 00:17:21.101 "superblock": true, 00:17:21.101 "num_base_bdevs": 2, 00:17:21.101 "num_base_bdevs_discovered": 2, 00:17:21.101 "num_base_bdevs_operational": 2, 00:17:21.101 "base_bdevs_list": [ 00:17:21.101 { 00:17:21.101 "name": "pt1", 00:17:21.101 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:21.101 "is_configured": true, 00:17:21.101 "data_offset": 256, 00:17:21.101 "data_size": 7936 00:17:21.101 }, 00:17:21.101 { 00:17:21.101 "name": "pt2", 00:17:21.101 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:21.101 "is_configured": true, 00:17:21.101 "data_offset": 256, 00:17:21.101 "data_size": 7936 00:17:21.101 } 00:17:21.101 ] 00:17:21.101 } 00:17:21.101 } 00:17:21.101 }' 00:17:21.101 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:21.101 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:21.101 pt2' 00:17:21.101 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:21.101 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:21.101 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:21.101 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:21.101 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.101 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:21.101 08:53:57 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.101 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.101 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:21.101 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:21.101 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:21.101 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:21.101 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.101 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.101 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:21.101 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.101 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:21.101 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:21.101 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:21.101 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.101 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.101 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:21.101 [2024-10-05 08:53:57.536004] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:21.101 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:17:21.361 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=49944ea9-930a-4df9-8bb2-5ed985965b64 00:17:21.361 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 49944ea9-930a-4df9-8bb2-5ed985965b64 ']' 00:17:21.361 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:21.361 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.361 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.361 [2024-10-05 08:53:57.579693] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:21.361 [2024-10-05 08:53:57.579715] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:21.361 [2024-10-05 08:53:57.579780] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:21.361 [2024-10-05 08:53:57.579828] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:21.361 [2024-10-05 08:53:57.579839] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:21.361 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.361 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.361 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:21.361 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.361 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.361 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.361 08:53:57 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:21.361 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:21.361 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:21.361 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:21.361 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.361 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.361 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.361 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:21.361 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:21.361 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.361 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.362 [2024-10-05 08:53:57.711465] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:21.362 [2024-10-05 08:53:57.713271] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:21.362 [2024-10-05 08:53:57.713365] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:21.362 [2024-10-05 08:53:57.713459] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:21.362 [2024-10-05 08:53:57.713520] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:21.362 [2024-10-05 08:53:57.713567] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:21.362 request: 00:17:21.362 { 00:17:21.362 "name": "raid_bdev1", 00:17:21.362 "raid_level": "raid1", 00:17:21.362 "base_bdevs": [ 00:17:21.362 "malloc1", 00:17:21.362 "malloc2" 00:17:21.362 ], 00:17:21.362 "superblock": false, 00:17:21.362 "method": "bdev_raid_create", 00:17:21.362 "req_id": 1 00:17:21.362 } 00:17:21.362 Got JSON-RPC error response 00:17:21.362 response: 00:17:21.362 { 00:17:21.362 "code": -17, 00:17:21.362 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:21.362 } 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.362 [2024-10-05 08:53:57.775341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:21.362 [2024-10-05 08:53:57.775387] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.362 [2024-10-05 08:53:57.775401] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:21.362 [2024-10-05 08:53:57.775410] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.362 [2024-10-05 08:53:57.777476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.362 [2024-10-05 08:53:57.777514] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:21.362 [2024-10-05 08:53:57.777573] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:21.362 [2024-10-05 08:53:57.777626] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:21.362 pt1 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.362 "name": "raid_bdev1", 00:17:21.362 "uuid": "49944ea9-930a-4df9-8bb2-5ed985965b64", 00:17:21.362 "strip_size_kb": 0, 00:17:21.362 "state": "configuring", 00:17:21.362 "raid_level": "raid1", 00:17:21.362 "superblock": true, 00:17:21.362 "num_base_bdevs": 2, 00:17:21.362 "num_base_bdevs_discovered": 1, 00:17:21.362 "num_base_bdevs_operational": 2, 00:17:21.362 "base_bdevs_list": [ 00:17:21.362 { 00:17:21.362 "name": "pt1", 00:17:21.362 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:21.362 "is_configured": true, 00:17:21.362 "data_offset": 256, 00:17:21.362 "data_size": 7936 00:17:21.362 }, 00:17:21.362 { 00:17:21.362 "name": null, 00:17:21.362 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:21.362 "is_configured": false, 00:17:21.362 "data_offset": 256, 00:17:21.362 "data_size": 7936 00:17:21.362 } 
00:17:21.362 ] 00:17:21.362 }' 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.362 08:53:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.931 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:21.931 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:21.931 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:21.931 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:21.931 08:53:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.931 08:53:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.931 [2024-10-05 08:53:58.210614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:21.931 [2024-10-05 08:53:58.210727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.931 [2024-10-05 08:53:58.210781] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:21.931 [2024-10-05 08:53:58.210810] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.931 [2024-10-05 08:53:58.211245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.931 [2024-10-05 08:53:58.211302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:21.931 [2024-10-05 08:53:58.211394] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:21.931 [2024-10-05 08:53:58.211445] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:21.931 [2024-10-05 08:53:58.211563] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:17:21.931 [2024-10-05 08:53:58.211604] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:21.931 [2024-10-05 08:53:58.211843] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:21.931 [2024-10-05 08:53:58.212041] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:21.931 [2024-10-05 08:53:58.212083] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:21.931 [2024-10-05 08:53:58.212243] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:21.931 pt2 00:17:21.931 08:53:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.931 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:21.931 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:21.931 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:21.931 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:21.931 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:21.931 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:21.931 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:21.931 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:21.931 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.931 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.931 08:53:58 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.931 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.931 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.931 08:53:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.931 08:53:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.931 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.931 08:53:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.931 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.931 "name": "raid_bdev1", 00:17:21.931 "uuid": "49944ea9-930a-4df9-8bb2-5ed985965b64", 00:17:21.931 "strip_size_kb": 0, 00:17:21.931 "state": "online", 00:17:21.931 "raid_level": "raid1", 00:17:21.931 "superblock": true, 00:17:21.931 "num_base_bdevs": 2, 00:17:21.931 "num_base_bdevs_discovered": 2, 00:17:21.932 "num_base_bdevs_operational": 2, 00:17:21.932 "base_bdevs_list": [ 00:17:21.932 { 00:17:21.932 "name": "pt1", 00:17:21.932 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:21.932 "is_configured": true, 00:17:21.932 "data_offset": 256, 00:17:21.932 "data_size": 7936 00:17:21.932 }, 00:17:21.932 { 00:17:21.932 "name": "pt2", 00:17:21.932 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:21.932 "is_configured": true, 00:17:21.932 "data_offset": 256, 00:17:21.932 "data_size": 7936 00:17:21.932 } 00:17:21.932 ] 00:17:21.932 }' 00:17:21.932 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.932 08:53:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.500 [2024-10-05 08:53:58.697998] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:22.500 "name": "raid_bdev1", 00:17:22.500 "aliases": [ 00:17:22.500 "49944ea9-930a-4df9-8bb2-5ed985965b64" 00:17:22.500 ], 00:17:22.500 "product_name": "Raid Volume", 00:17:22.500 "block_size": 4096, 00:17:22.500 "num_blocks": 7936, 00:17:22.500 "uuid": "49944ea9-930a-4df9-8bb2-5ed985965b64", 00:17:22.500 "assigned_rate_limits": { 00:17:22.500 "rw_ios_per_sec": 0, 00:17:22.500 "rw_mbytes_per_sec": 0, 00:17:22.500 "r_mbytes_per_sec": 0, 00:17:22.500 "w_mbytes_per_sec": 0 00:17:22.500 }, 00:17:22.500 "claimed": false, 00:17:22.500 "zoned": false, 00:17:22.500 "supported_io_types": { 00:17:22.500 "read": true, 00:17:22.500 "write": true, 00:17:22.500 "unmap": false, 
00:17:22.500 "flush": false, 00:17:22.500 "reset": true, 00:17:22.500 "nvme_admin": false, 00:17:22.500 "nvme_io": false, 00:17:22.500 "nvme_io_md": false, 00:17:22.500 "write_zeroes": true, 00:17:22.500 "zcopy": false, 00:17:22.500 "get_zone_info": false, 00:17:22.500 "zone_management": false, 00:17:22.500 "zone_append": false, 00:17:22.500 "compare": false, 00:17:22.500 "compare_and_write": false, 00:17:22.500 "abort": false, 00:17:22.500 "seek_hole": false, 00:17:22.500 "seek_data": false, 00:17:22.500 "copy": false, 00:17:22.500 "nvme_iov_md": false 00:17:22.500 }, 00:17:22.500 "memory_domains": [ 00:17:22.500 { 00:17:22.500 "dma_device_id": "system", 00:17:22.500 "dma_device_type": 1 00:17:22.500 }, 00:17:22.500 { 00:17:22.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.500 "dma_device_type": 2 00:17:22.500 }, 00:17:22.500 { 00:17:22.500 "dma_device_id": "system", 00:17:22.500 "dma_device_type": 1 00:17:22.500 }, 00:17:22.500 { 00:17:22.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.500 "dma_device_type": 2 00:17:22.500 } 00:17:22.500 ], 00:17:22.500 "driver_specific": { 00:17:22.500 "raid": { 00:17:22.500 "uuid": "49944ea9-930a-4df9-8bb2-5ed985965b64", 00:17:22.500 "strip_size_kb": 0, 00:17:22.500 "state": "online", 00:17:22.500 "raid_level": "raid1", 00:17:22.500 "superblock": true, 00:17:22.500 "num_base_bdevs": 2, 00:17:22.500 "num_base_bdevs_discovered": 2, 00:17:22.500 "num_base_bdevs_operational": 2, 00:17:22.500 "base_bdevs_list": [ 00:17:22.500 { 00:17:22.500 "name": "pt1", 00:17:22.500 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:22.500 "is_configured": true, 00:17:22.500 "data_offset": 256, 00:17:22.500 "data_size": 7936 00:17:22.500 }, 00:17:22.500 { 00:17:22.500 "name": "pt2", 00:17:22.500 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:22.500 "is_configured": true, 00:17:22.500 "data_offset": 256, 00:17:22.500 "data_size": 7936 00:17:22.500 } 00:17:22.500 ] 00:17:22.500 } 00:17:22.500 } 00:17:22.500 }' 00:17:22.500 
08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:22.500 pt2' 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.500 
08:53:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.500 [2024-10-05 08:53:58.933560] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 49944ea9-930a-4df9-8bb2-5ed985965b64 '!=' 49944ea9-930a-4df9-8bb2-5ed985965b64 ']' 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.500 [2024-10-05 08:53:58.961335] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:22.500 
08:53:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.500 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.761 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.761 08:53:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.761 08:53:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.761 08:53:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.761 08:53:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.761 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.761 "name": "raid_bdev1", 00:17:22.761 "uuid": "49944ea9-930a-4df9-8bb2-5ed985965b64", 
00:17:22.761 "strip_size_kb": 0, 00:17:22.761 "state": "online", 00:17:22.761 "raid_level": "raid1", 00:17:22.761 "superblock": true, 00:17:22.761 "num_base_bdevs": 2, 00:17:22.761 "num_base_bdevs_discovered": 1, 00:17:22.761 "num_base_bdevs_operational": 1, 00:17:22.761 "base_bdevs_list": [ 00:17:22.761 { 00:17:22.761 "name": null, 00:17:22.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.761 "is_configured": false, 00:17:22.761 "data_offset": 0, 00:17:22.761 "data_size": 7936 00:17:22.761 }, 00:17:22.761 { 00:17:22.761 "name": "pt2", 00:17:22.761 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:22.761 "is_configured": true, 00:17:22.761 "data_offset": 256, 00:17:22.761 "data_size": 7936 00:17:22.761 } 00:17:22.761 ] 00:17:22.761 }' 00:17:22.761 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.761 08:53:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.021 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:23.021 08:53:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.021 08:53:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.021 [2024-10-05 08:53:59.392658] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:23.021 [2024-10-05 08:53:59.392725] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:23.021 [2024-10-05 08:53:59.392811] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:23.021 [2024-10-05 08:53:59.392868] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:23.021 [2024-10-05 08:53:59.392901] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:23.021 08:53:59 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.021 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.021 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:23.021 08:53:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.021 08:53:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.021 08:53:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.021 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:23.021 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:23.021 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:23.021 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:23.021 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:23.021 08:53:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.021 08:53:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.021 08:53:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.021 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:23.021 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:23.021 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:23.021 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:23.021 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:17:23.021 08:53:59 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:23.021 08:53:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.021 08:53:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.021 [2024-10-05 08:53:59.464540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:23.021 [2024-10-05 08:53:59.464591] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.022 [2024-10-05 08:53:59.464606] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:23.022 [2024-10-05 08:53:59.464617] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.022 [2024-10-05 08:53:59.466799] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.022 [2024-10-05 08:53:59.466838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:23.022 [2024-10-05 08:53:59.466910] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:23.022 [2024-10-05 08:53:59.466963] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:23.022 [2024-10-05 08:53:59.467082] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:23.022 [2024-10-05 08:53:59.467126] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:23.022 [2024-10-05 08:53:59.467348] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:23.022 [2024-10-05 08:53:59.467485] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:23.022 [2024-10-05 08:53:59.467494] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
00:17:23.022 [2024-10-05 08:53:59.467631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:23.022 pt2 00:17:23.022 08:53:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.022 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:23.022 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:23.022 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:23.022 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:23.022 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:23.022 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:23.022 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.022 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.022 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.022 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.022 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.022 08:53:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.022 08:53:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.022 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.022 08:53:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.282 08:53:59 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.282 "name": "raid_bdev1", 00:17:23.282 "uuid": "49944ea9-930a-4df9-8bb2-5ed985965b64", 00:17:23.282 "strip_size_kb": 0, 00:17:23.282 "state": "online", 00:17:23.282 "raid_level": "raid1", 00:17:23.282 "superblock": true, 00:17:23.282 "num_base_bdevs": 2, 00:17:23.282 "num_base_bdevs_discovered": 1, 00:17:23.282 "num_base_bdevs_operational": 1, 00:17:23.282 "base_bdevs_list": [ 00:17:23.282 { 00:17:23.282 "name": null, 00:17:23.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.282 "is_configured": false, 00:17:23.282 "data_offset": 256, 00:17:23.282 "data_size": 7936 00:17:23.282 }, 00:17:23.282 { 00:17:23.282 "name": "pt2", 00:17:23.282 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:23.282 "is_configured": true, 00:17:23.282 "data_offset": 256, 00:17:23.282 "data_size": 7936 00:17:23.282 } 00:17:23.282 ] 00:17:23.282 }' 00:17:23.282 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.282 08:53:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.556 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:23.556 08:53:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.556 08:53:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.556 [2024-10-05 08:53:59.875779] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:23.556 [2024-10-05 08:53:59.875847] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:23.557 [2024-10-05 08:53:59.875913] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:23.557 [2024-10-05 08:53:59.875979] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:23.557 [2024-10-05 08:53:59.876011] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:23.557 08:53:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.557 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:23.557 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.557 08:53:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.557 08:53:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.557 08:53:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.557 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:23.557 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:23.557 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:23.557 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:23.557 08:53:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.557 08:53:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.557 [2024-10-05 08:53:59.939698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:23.557 [2024-10-05 08:53:59.939782] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.557 [2024-10-05 08:53:59.939815] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:23.557 [2024-10-05 08:53:59.939843] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.557 [2024-10-05 08:53:59.941969] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.557 [2024-10-05 08:53:59.942036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:23.557 [2024-10-05 08:53:59.942132] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:23.557 [2024-10-05 08:53:59.942199] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:23.557 [2024-10-05 08:53:59.942346] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:23.557 [2024-10-05 08:53:59.942397] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:23.557 [2024-10-05 08:53:59.942438] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:23.557 [2024-10-05 08:53:59.942551] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:23.557 [2024-10-05 08:53:59.942659] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:23.557 [2024-10-05 08:53:59.942695] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:23.557 [2024-10-05 08:53:59.942928] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:23.557 [2024-10-05 08:53:59.943103] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:23.557 [2024-10-05 08:53:59.943147] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:23.557 [2024-10-05 08:53:59.943325] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:23.557 pt1 00:17:23.557 08:53:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.557 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:17:23.557 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:23.557 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:23.557 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:23.557 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:23.557 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:23.557 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:23.557 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.557 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.557 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.557 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.557 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.557 08:53:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.557 08:53:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.557 08:53:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.557 08:53:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.557 08:54:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.558 "name": "raid_bdev1", 00:17:23.558 "uuid": "49944ea9-930a-4df9-8bb2-5ed985965b64", 00:17:23.558 "strip_size_kb": 0, 00:17:23.558 "state": "online", 00:17:23.558 "raid_level": "raid1", 
00:17:23.558 "superblock": true, 00:17:23.558 "num_base_bdevs": 2, 00:17:23.558 "num_base_bdevs_discovered": 1, 00:17:23.558 "num_base_bdevs_operational": 1, 00:17:23.558 "base_bdevs_list": [ 00:17:23.558 { 00:17:23.558 "name": null, 00:17:23.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.558 "is_configured": false, 00:17:23.558 "data_offset": 256, 00:17:23.558 "data_size": 7936 00:17:23.558 }, 00:17:23.558 { 00:17:23.558 "name": "pt2", 00:17:23.558 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:23.558 "is_configured": true, 00:17:23.558 "data_offset": 256, 00:17:23.558 "data_size": 7936 00:17:23.558 } 00:17:23.558 ] 00:17:23.558 }' 00:17:23.558 08:54:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.558 08:54:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.133 08:54:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:24.133 08:54:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:24.133 08:54:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.133 08:54:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.133 08:54:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.133 08:54:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:24.133 08:54:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:24.133 08:54:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:24.133 08:54:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.133 08:54:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.133 
[2024-10-05 08:54:00.482955] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:24.133 08:54:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.133 08:54:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 49944ea9-930a-4df9-8bb2-5ed985965b64 '!=' 49944ea9-930a-4df9-8bb2-5ed985965b64 ']' 00:17:24.133 08:54:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 82546 00:17:24.133 08:54:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 82546 ']' 00:17:24.133 08:54:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 82546 00:17:24.133 08:54:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:17:24.133 08:54:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:24.133 08:54:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82546 00:17:24.133 08:54:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:24.133 08:54:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:24.133 killing process with pid 82546 00:17:24.133 08:54:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82546' 00:17:24.133 08:54:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 82546 00:17:24.133 [2024-10-05 08:54:00.559230] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:24.133 [2024-10-05 08:54:00.559310] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:24.133 [2024-10-05 08:54:00.559352] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:24.133 [2024-10-05 08:54:00.559365] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:24.133 08:54:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 82546 00:17:24.399 [2024-10-05 08:54:00.752965] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:25.825 ************************************ 00:17:25.825 END TEST raid_superblock_test_4k 00:17:25.825 ************************************ 00:17:25.825 08:54:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:17:25.825 00:17:25.825 real 0m6.094s 00:17:25.825 user 0m9.147s 00:17:25.825 sys 0m1.106s 00:17:25.825 08:54:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:25.825 08:54:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.825 08:54:02 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:17:25.825 08:54:02 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:17:25.825 08:54:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:25.825 08:54:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:25.825 08:54:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:25.825 ************************************ 00:17:25.825 START TEST raid_rebuild_test_sb_4k 00:17:25.825 ************************************ 00:17:25.825 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:17:25.825 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:25.825 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:25.825 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:25.825 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:25.825 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:25.825 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:25.825 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:25.825 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:25.825 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:25.825 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:25.825 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:25.825 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:25.825 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:25.825 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:25.825 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:25.825 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:25.825 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:25.825 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:25.825 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:25.825 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:25.825 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:25.825 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:25.825 08:54:02 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:25.825 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:25.825 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=82833 00:17:25.825 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:25.825 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 82833 00:17:25.825 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 82833 ']' 00:17:25.825 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.825 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:25.825 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.825 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:25.825 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.825 [2024-10-05 08:54:02.128921] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:17:25.825 [2024-10-05 08:54:02.129597] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:17:25.825 Zero copy mechanism will not be used. 
00:17:25.825 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82833 ] 00:17:25.825 [2024-10-05 08:54:02.292444] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.085 [2024-10-05 08:54:02.485515] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.344 [2024-10-05 08:54:02.663673] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:26.344 [2024-10-05 08:54:02.663796] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:26.605 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:26.605 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:17:26.605 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:26.605 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:17:26.605 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.605 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.605 BaseBdev1_malloc 00:17:26.605 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.605 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:26.605 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.605 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.605 [2024-10-05 08:54:02.982323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:26.605 [2024-10-05 08:54:02.982426] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:26.605 [2024-10-05 08:54:02.982467] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:26.605 [2024-10-05 08:54:02.982500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:26.605 [2024-10-05 08:54:02.984450] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:26.605 [2024-10-05 08:54:02.984487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:26.605 BaseBdev1 00:17:26.605 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.605 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:26.605 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:17:26.605 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.605 08:54:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.605 BaseBdev2_malloc 00:17:26.605 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.605 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:26.605 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.605 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.605 [2024-10-05 08:54:03.064469] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:26.605 [2024-10-05 08:54:03.064528] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:26.605 [2024-10-05 08:54:03.064548] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x616000007e80 00:17:26.605 [2024-10-05 08:54:03.064559] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:26.605 [2024-10-05 08:54:03.066612] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:26.605 [2024-10-05 08:54:03.066667] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:26.605 BaseBdev2 00:17:26.605 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.605 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:17:26.605 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.605 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.865 spare_malloc 00:17:26.865 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.865 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:26.865 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.865 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.865 spare_delay 00:17:26.865 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.865 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:26.865 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.865 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.865 [2024-10-05 08:54:03.126471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:26.865 
[2024-10-05 08:54:03.126523] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:26.865 [2024-10-05 08:54:03.126542] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:26.865 [2024-10-05 08:54:03.126552] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:26.865 [2024-10-05 08:54:03.128515] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:26.865 [2024-10-05 08:54:03.128555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:26.865 spare 00:17:26.865 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.865 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:26.865 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.866 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.866 [2024-10-05 08:54:03.138497] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:26.866 [2024-10-05 08:54:03.140178] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:26.866 [2024-10-05 08:54:03.140338] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:26.866 [2024-10-05 08:54:03.140353] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:26.866 [2024-10-05 08:54:03.140576] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:26.866 [2024-10-05 08:54:03.140727] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:26.866 [2024-10-05 08:54:03.140735] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000007780 00:17:26.866 [2024-10-05 08:54:03.140865] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:26.866 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.866 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:26.866 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:26.866 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.866 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:26.866 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:26.866 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:26.866 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.866 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.866 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.866 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.866 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.866 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.866 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.866 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.866 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.866 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.866 "name": "raid_bdev1", 00:17:26.866 "uuid": "cbe68a45-db8c-4afc-b33f-35639dd66aeb", 00:17:26.866 "strip_size_kb": 0, 00:17:26.866 "state": "online", 00:17:26.866 "raid_level": "raid1", 00:17:26.866 "superblock": true, 00:17:26.866 "num_base_bdevs": 2, 00:17:26.866 "num_base_bdevs_discovered": 2, 00:17:26.866 "num_base_bdevs_operational": 2, 00:17:26.866 "base_bdevs_list": [ 00:17:26.866 { 00:17:26.866 "name": "BaseBdev1", 00:17:26.866 "uuid": "90f28f95-a1a9-5f1b-bc0a-f684bf673bb2", 00:17:26.866 "is_configured": true, 00:17:26.866 "data_offset": 256, 00:17:26.866 "data_size": 7936 00:17:26.866 }, 00:17:26.866 { 00:17:26.866 "name": "BaseBdev2", 00:17:26.866 "uuid": "7ef54dc1-c449-5885-ab3c-2fc2a1b35d28", 00:17:26.866 "is_configured": true, 00:17:26.866 "data_offset": 256, 00:17:26.866 "data_size": 7936 00:17:26.866 } 00:17:26.866 ] 00:17:26.866 }' 00:17:26.866 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.866 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.435 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:27.435 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.436 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.436 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:27.436 [2024-10-05 08:54:03.617935] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:27.436 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.436 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:27.436 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:27.436 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:27.436 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.436 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.436 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.436 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:27.436 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:27.436 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:27.436 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:27.436 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:27.436 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:27.436 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:27.436 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:27.436 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:27.436 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:27.436 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:27.436 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:27.436 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:27.436 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:27.436 [2024-10-05 08:54:03.893315] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:27.695 /dev/nbd0 00:17:27.695 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:27.695 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:27.695 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:27.695 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:17:27.695 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:27.695 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:27.695 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:27.695 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:17:27.695 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:27.695 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:27.695 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:27.695 1+0 records in 00:17:27.695 1+0 records out 00:17:27.695 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379281 s, 10.8 MB/s 00:17:27.695 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:27.695 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:17:27.695 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:27.695 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:27.696 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:17:27.696 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:27.696 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:27.696 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:27.696 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:27.696 08:54:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:28.265 7936+0 records in 00:17:28.265 7936+0 records out 00:17:28.265 32505856 bytes (33 MB, 31 MiB) copied, 0.621045 s, 52.3 MB/s 00:17:28.265 08:54:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:28.265 08:54:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:28.265 08:54:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:28.265 08:54:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:28.265 08:54:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:28.265 08:54:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:28.265 08:54:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:28.526 [2024-10-05 08:54:04.792221] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:28.526 08:54:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # 
basename /dev/nbd0 00:17:28.526 08:54:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:28.526 08:54:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:28.526 08:54:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:28.526 08:54:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:28.526 08:54:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:28.526 08:54:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:28.526 08:54:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:28.526 08:54:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:28.526 08:54:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.526 08:54:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.526 [2024-10-05 08:54:04.820368] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:28.526 08:54:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.526 08:54:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:28.526 08:54:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:28.526 08:54:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:28.526 08:54:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:28.526 08:54:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:28.526 08:54:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:17:28.526 08:54:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.526 08:54:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.526 08:54:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.526 08:54:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.526 08:54:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.526 08:54:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.526 08:54:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.526 08:54:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.526 08:54:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.526 08:54:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.526 "name": "raid_bdev1", 00:17:28.526 "uuid": "cbe68a45-db8c-4afc-b33f-35639dd66aeb", 00:17:28.526 "strip_size_kb": 0, 00:17:28.526 "state": "online", 00:17:28.526 "raid_level": "raid1", 00:17:28.526 "superblock": true, 00:17:28.526 "num_base_bdevs": 2, 00:17:28.526 "num_base_bdevs_discovered": 1, 00:17:28.526 "num_base_bdevs_operational": 1, 00:17:28.526 "base_bdevs_list": [ 00:17:28.526 { 00:17:28.526 "name": null, 00:17:28.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.526 "is_configured": false, 00:17:28.526 "data_offset": 0, 00:17:28.526 "data_size": 7936 00:17:28.526 }, 00:17:28.526 { 00:17:28.526 "name": "BaseBdev2", 00:17:28.526 "uuid": "7ef54dc1-c449-5885-ab3c-2fc2a1b35d28", 00:17:28.526 "is_configured": true, 00:17:28.526 "data_offset": 256, 00:17:28.526 "data_size": 7936 00:17:28.526 } 00:17:28.526 ] 00:17:28.526 }' 00:17:28.526 08:54:04 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.526 08:54:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.097 08:54:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:29.097 08:54:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.097 08:54:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.097 [2024-10-05 08:54:05.267871] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:29.097 [2024-10-05 08:54:05.282615] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:29.097 08:54:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.097 08:54:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:29.097 [2024-10-05 08:54:05.284840] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:30.038 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:30.038 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:30.038 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:30.038 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:30.038 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:30.038 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.038 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.038 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.038 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.038 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.038 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.038 "name": "raid_bdev1", 00:17:30.038 "uuid": "cbe68a45-db8c-4afc-b33f-35639dd66aeb", 00:17:30.038 "strip_size_kb": 0, 00:17:30.038 "state": "online", 00:17:30.038 "raid_level": "raid1", 00:17:30.038 "superblock": true, 00:17:30.038 "num_base_bdevs": 2, 00:17:30.038 "num_base_bdevs_discovered": 2, 00:17:30.038 "num_base_bdevs_operational": 2, 00:17:30.038 "process": { 00:17:30.038 "type": "rebuild", 00:17:30.038 "target": "spare", 00:17:30.038 "progress": { 00:17:30.038 "blocks": 2560, 00:17:30.038 "percent": 32 00:17:30.038 } 00:17:30.038 }, 00:17:30.038 "base_bdevs_list": [ 00:17:30.038 { 00:17:30.038 "name": "spare", 00:17:30.038 "uuid": "73fff75f-9376-5a03-b56a-d2718297914d", 00:17:30.038 "is_configured": true, 00:17:30.038 "data_offset": 256, 00:17:30.038 "data_size": 7936 00:17:30.038 }, 00:17:30.038 { 00:17:30.038 "name": "BaseBdev2", 00:17:30.038 "uuid": "7ef54dc1-c449-5885-ab3c-2fc2a1b35d28", 00:17:30.038 "is_configured": true, 00:17:30.038 "data_offset": 256, 00:17:30.038 "data_size": 7936 00:17:30.038 } 00:17:30.038 ] 00:17:30.038 }' 00:17:30.038 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:30.038 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:30.038 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:30.038 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:30.038 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd 
bdev_raid_remove_base_bdev spare 00:17:30.038 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.038 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.038 [2024-10-05 08:54:06.440167] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:30.038 [2024-10-05 08:54:06.493767] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:30.038 [2024-10-05 08:54:06.493843] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:30.038 [2024-10-05 08:54:06.493862] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:30.038 [2024-10-05 08:54:06.493877] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:30.300 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.300 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:30.300 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:30.300 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:30.300 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:30.300 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:30.300 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:30.300 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.300 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.300 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:30.300 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.300 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.300 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.300 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.300 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.300 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.300 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.300 "name": "raid_bdev1", 00:17:30.300 "uuid": "cbe68a45-db8c-4afc-b33f-35639dd66aeb", 00:17:30.300 "strip_size_kb": 0, 00:17:30.300 "state": "online", 00:17:30.300 "raid_level": "raid1", 00:17:30.300 "superblock": true, 00:17:30.300 "num_base_bdevs": 2, 00:17:30.300 "num_base_bdevs_discovered": 1, 00:17:30.300 "num_base_bdevs_operational": 1, 00:17:30.300 "base_bdevs_list": [ 00:17:30.300 { 00:17:30.300 "name": null, 00:17:30.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.300 "is_configured": false, 00:17:30.300 "data_offset": 0, 00:17:30.300 "data_size": 7936 00:17:30.300 }, 00:17:30.300 { 00:17:30.300 "name": "BaseBdev2", 00:17:30.300 "uuid": "7ef54dc1-c449-5885-ab3c-2fc2a1b35d28", 00:17:30.300 "is_configured": true, 00:17:30.300 "data_offset": 256, 00:17:30.300 "data_size": 7936 00:17:30.300 } 00:17:30.300 ] 00:17:30.300 }' 00:17:30.300 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.300 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.563 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:30.563 
08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:30.563 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:30.563 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:30.563 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:30.563 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.563 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.563 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.563 08:54:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.563 08:54:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.823 08:54:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.823 "name": "raid_bdev1", 00:17:30.823 "uuid": "cbe68a45-db8c-4afc-b33f-35639dd66aeb", 00:17:30.823 "strip_size_kb": 0, 00:17:30.823 "state": "online", 00:17:30.823 "raid_level": "raid1", 00:17:30.823 "superblock": true, 00:17:30.823 "num_base_bdevs": 2, 00:17:30.823 "num_base_bdevs_discovered": 1, 00:17:30.823 "num_base_bdevs_operational": 1, 00:17:30.823 "base_bdevs_list": [ 00:17:30.823 { 00:17:30.823 "name": null, 00:17:30.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.823 "is_configured": false, 00:17:30.823 "data_offset": 0, 00:17:30.823 "data_size": 7936 00:17:30.823 }, 00:17:30.823 { 00:17:30.823 "name": "BaseBdev2", 00:17:30.823 "uuid": "7ef54dc1-c449-5885-ab3c-2fc2a1b35d28", 00:17:30.823 "is_configured": true, 00:17:30.823 "data_offset": 256, 00:17:30.823 "data_size": 7936 00:17:30.823 } 00:17:30.823 ] 00:17:30.823 }' 00:17:30.823 08:54:07 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:30.823 08:54:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:30.823 08:54:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:30.823 08:54:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:30.823 08:54:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:30.823 08:54:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.823 08:54:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.823 [2024-10-05 08:54:07.134130] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:30.823 [2024-10-05 08:54:07.150117] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:30.823 08:54:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.823 08:54:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:30.823 [2024-10-05 08:54:07.152315] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:31.762 08:54:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:31.762 08:54:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:31.762 08:54:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:31.762 08:54:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:31.762 08:54:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:31.762 08:54:08 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.762 08:54:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.762 08:54:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.762 08:54:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.762 08:54:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.762 08:54:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:31.762 "name": "raid_bdev1", 00:17:31.762 "uuid": "cbe68a45-db8c-4afc-b33f-35639dd66aeb", 00:17:31.762 "strip_size_kb": 0, 00:17:31.762 "state": "online", 00:17:31.762 "raid_level": "raid1", 00:17:31.762 "superblock": true, 00:17:31.762 "num_base_bdevs": 2, 00:17:31.762 "num_base_bdevs_discovered": 2, 00:17:31.762 "num_base_bdevs_operational": 2, 00:17:31.762 "process": { 00:17:31.762 "type": "rebuild", 00:17:31.762 "target": "spare", 00:17:31.762 "progress": { 00:17:31.763 "blocks": 2560, 00:17:31.763 "percent": 32 00:17:31.763 } 00:17:31.763 }, 00:17:31.763 "base_bdevs_list": [ 00:17:31.763 { 00:17:31.763 "name": "spare", 00:17:31.763 "uuid": "73fff75f-9376-5a03-b56a-d2718297914d", 00:17:31.763 "is_configured": true, 00:17:31.763 "data_offset": 256, 00:17:31.763 "data_size": 7936 00:17:31.763 }, 00:17:31.763 { 00:17:31.763 "name": "BaseBdev2", 00:17:31.763 "uuid": "7ef54dc1-c449-5885-ab3c-2fc2a1b35d28", 00:17:31.763 "is_configured": true, 00:17:31.763 "data_offset": 256, 00:17:31.763 "data_size": 7936 00:17:31.763 } 00:17:31.763 ] 00:17:31.763 }' 00:17:31.763 08:54:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:32.022 08:54:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:32.022 08:54:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:17:32.022 08:54:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:32.022 08:54:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:32.022 08:54:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:32.022 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:32.022 08:54:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:32.022 08:54:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:32.022 08:54:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:32.022 08:54:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=680 00:17:32.022 08:54:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:32.022 08:54:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:32.022 08:54:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:32.022 08:54:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:32.022 08:54:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:32.022 08:54:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:32.022 08:54:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.022 08:54:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.022 08:54:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.022 08:54:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.022 08:54:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.022 08:54:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:32.022 "name": "raid_bdev1", 00:17:32.022 "uuid": "cbe68a45-db8c-4afc-b33f-35639dd66aeb", 00:17:32.022 "strip_size_kb": 0, 00:17:32.022 "state": "online", 00:17:32.022 "raid_level": "raid1", 00:17:32.022 "superblock": true, 00:17:32.022 "num_base_bdevs": 2, 00:17:32.022 "num_base_bdevs_discovered": 2, 00:17:32.022 "num_base_bdevs_operational": 2, 00:17:32.022 "process": { 00:17:32.022 "type": "rebuild", 00:17:32.022 "target": "spare", 00:17:32.022 "progress": { 00:17:32.022 "blocks": 2816, 00:17:32.022 "percent": 35 00:17:32.022 } 00:17:32.022 }, 00:17:32.022 "base_bdevs_list": [ 00:17:32.022 { 00:17:32.022 "name": "spare", 00:17:32.022 "uuid": "73fff75f-9376-5a03-b56a-d2718297914d", 00:17:32.022 "is_configured": true, 00:17:32.022 "data_offset": 256, 00:17:32.022 "data_size": 7936 00:17:32.022 }, 00:17:32.022 { 00:17:32.022 "name": "BaseBdev2", 00:17:32.022 "uuid": "7ef54dc1-c449-5885-ab3c-2fc2a1b35d28", 00:17:32.022 "is_configured": true, 00:17:32.022 "data_offset": 256, 00:17:32.022 "data_size": 7936 00:17:32.022 } 00:17:32.022 ] 00:17:32.022 }' 00:17:32.022 08:54:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:32.022 08:54:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:32.022 08:54:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:32.022 08:54:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:32.022 08:54:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:33.402 08:54:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 
00:17:33.402 08:54:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:33.402 08:54:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:33.402 08:54:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:33.402 08:54:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:33.402 08:54:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:33.402 08:54:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.402 08:54:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.402 08:54:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.402 08:54:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.402 08:54:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.402 08:54:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:33.402 "name": "raid_bdev1", 00:17:33.402 "uuid": "cbe68a45-db8c-4afc-b33f-35639dd66aeb", 00:17:33.402 "strip_size_kb": 0, 00:17:33.402 "state": "online", 00:17:33.402 "raid_level": "raid1", 00:17:33.402 "superblock": true, 00:17:33.402 "num_base_bdevs": 2, 00:17:33.402 "num_base_bdevs_discovered": 2, 00:17:33.402 "num_base_bdevs_operational": 2, 00:17:33.402 "process": { 00:17:33.402 "type": "rebuild", 00:17:33.402 "target": "spare", 00:17:33.402 "progress": { 00:17:33.402 "blocks": 5632, 00:17:33.402 "percent": 70 00:17:33.402 } 00:17:33.402 }, 00:17:33.402 "base_bdevs_list": [ 00:17:33.402 { 00:17:33.402 "name": "spare", 00:17:33.402 "uuid": "73fff75f-9376-5a03-b56a-d2718297914d", 00:17:33.402 "is_configured": true, 00:17:33.402 
"data_offset": 256, 00:17:33.402 "data_size": 7936 00:17:33.402 }, 00:17:33.402 { 00:17:33.402 "name": "BaseBdev2", 00:17:33.402 "uuid": "7ef54dc1-c449-5885-ab3c-2fc2a1b35d28", 00:17:33.402 "is_configured": true, 00:17:33.402 "data_offset": 256, 00:17:33.402 "data_size": 7936 00:17:33.402 } 00:17:33.402 ] 00:17:33.402 }' 00:17:33.402 08:54:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:33.402 08:54:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:33.402 08:54:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:33.402 08:54:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:33.402 08:54:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:33.971 [2024-10-05 08:54:10.274564] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:33.971 [2024-10-05 08:54:10.274657] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:33.971 [2024-10-05 08:54:10.274778] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:34.230 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:34.230 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:34.230 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:34.230 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:34.230 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:34.230 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:34.230 08:54:10 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.230 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.230 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.230 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.230 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.230 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:34.230 "name": "raid_bdev1", 00:17:34.230 "uuid": "cbe68a45-db8c-4afc-b33f-35639dd66aeb", 00:17:34.230 "strip_size_kb": 0, 00:17:34.230 "state": "online", 00:17:34.230 "raid_level": "raid1", 00:17:34.230 "superblock": true, 00:17:34.230 "num_base_bdevs": 2, 00:17:34.230 "num_base_bdevs_discovered": 2, 00:17:34.230 "num_base_bdevs_operational": 2, 00:17:34.230 "base_bdevs_list": [ 00:17:34.230 { 00:17:34.230 "name": "spare", 00:17:34.231 "uuid": "73fff75f-9376-5a03-b56a-d2718297914d", 00:17:34.231 "is_configured": true, 00:17:34.231 "data_offset": 256, 00:17:34.231 "data_size": 7936 00:17:34.231 }, 00:17:34.231 { 00:17:34.231 "name": "BaseBdev2", 00:17:34.231 "uuid": "7ef54dc1-c449-5885-ab3c-2fc2a1b35d28", 00:17:34.231 "is_configured": true, 00:17:34.231 "data_offset": 256, 00:17:34.231 "data_size": 7936 00:17:34.231 } 00:17:34.231 ] 00:17:34.231 }' 00:17:34.231 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:34.231 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:34.231 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:34.490 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:34.490 08:54:10 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:17:34.490 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:34.490 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:34.490 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:34.490 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:34.490 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:34.490 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.490 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.490 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.490 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.490 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.490 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:34.490 "name": "raid_bdev1", 00:17:34.490 "uuid": "cbe68a45-db8c-4afc-b33f-35639dd66aeb", 00:17:34.490 "strip_size_kb": 0, 00:17:34.490 "state": "online", 00:17:34.490 "raid_level": "raid1", 00:17:34.490 "superblock": true, 00:17:34.490 "num_base_bdevs": 2, 00:17:34.490 "num_base_bdevs_discovered": 2, 00:17:34.490 "num_base_bdevs_operational": 2, 00:17:34.490 "base_bdevs_list": [ 00:17:34.490 { 00:17:34.490 "name": "spare", 00:17:34.490 "uuid": "73fff75f-9376-5a03-b56a-d2718297914d", 00:17:34.490 "is_configured": true, 00:17:34.490 "data_offset": 256, 00:17:34.490 "data_size": 7936 00:17:34.490 }, 00:17:34.490 { 00:17:34.490 "name": "BaseBdev2", 00:17:34.490 "uuid": 
"7ef54dc1-c449-5885-ab3c-2fc2a1b35d28", 00:17:34.490 "is_configured": true, 00:17:34.490 "data_offset": 256, 00:17:34.490 "data_size": 7936 00:17:34.490 } 00:17:34.490 ] 00:17:34.490 }' 00:17:34.490 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:34.490 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:34.490 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:34.490 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:34.490 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:34.490 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:34.490 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:34.490 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:34.490 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:34.490 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:34.490 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.490 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.490 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.490 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.490 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.490 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:34.490 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.490 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.490 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.490 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.490 "name": "raid_bdev1", 00:17:34.490 "uuid": "cbe68a45-db8c-4afc-b33f-35639dd66aeb", 00:17:34.490 "strip_size_kb": 0, 00:17:34.490 "state": "online", 00:17:34.490 "raid_level": "raid1", 00:17:34.490 "superblock": true, 00:17:34.490 "num_base_bdevs": 2, 00:17:34.490 "num_base_bdevs_discovered": 2, 00:17:34.490 "num_base_bdevs_operational": 2, 00:17:34.490 "base_bdevs_list": [ 00:17:34.490 { 00:17:34.490 "name": "spare", 00:17:34.490 "uuid": "73fff75f-9376-5a03-b56a-d2718297914d", 00:17:34.490 "is_configured": true, 00:17:34.490 "data_offset": 256, 00:17:34.490 "data_size": 7936 00:17:34.490 }, 00:17:34.490 { 00:17:34.490 "name": "BaseBdev2", 00:17:34.490 "uuid": "7ef54dc1-c449-5885-ab3c-2fc2a1b35d28", 00:17:34.490 "is_configured": true, 00:17:34.490 "data_offset": 256, 00:17:34.490 "data_size": 7936 00:17:34.490 } 00:17:34.490 ] 00:17:34.490 }' 00:17:34.490 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.490 08:54:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.057 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:35.057 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.057 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.057 [2024-10-05 08:54:11.317057] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:35.057 [2024-10-05 
08:54:11.317176] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:35.057 [2024-10-05 08:54:11.317315] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:35.057 [2024-10-05 08:54:11.317414] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:35.057 [2024-10-05 08:54:11.317469] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:35.057 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.057 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.057 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.057 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.057 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:17:35.057 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.057 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:35.057 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:35.057 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:35.057 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:35.057 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:35.057 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:35.057 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:35.057 
08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:35.057 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:35.057 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:35.057 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:35.057 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:35.057 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:35.315 /dev/nbd0 00:17:35.315 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:35.315 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:35.315 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:35.315 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:17:35.315 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:35.315 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:35.315 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:35.315 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:17:35.315 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:35.315 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:35.315 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:35.315 1+0 
records in 00:17:35.315 1+0 records out 00:17:35.315 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529252 s, 7.7 MB/s 00:17:35.315 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:35.315 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:17:35.315 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:35.315 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:35.315 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:17:35.315 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:35.315 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:35.315 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:35.574 /dev/nbd1 00:17:35.574 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:35.574 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:35.574 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:35.574 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:17:35.574 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:35.574 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:35.574 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:35.574 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 
00:17:35.574 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:35.574 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:35.574 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:35.574 1+0 records in 00:17:35.574 1+0 records out 00:17:35.574 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000332405 s, 12.3 MB/s 00:17:35.574 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:35.574 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:17:35.574 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:35.574 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:35.574 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:17:35.574 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:35.574 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:35.574 08:54:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:35.574 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:35.574 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:35.574 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:35.574 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:35.574 08:54:12 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:35.574 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:35.574 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:35.856 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:35.856 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:35.856 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:35.856 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:35.856 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:35.856 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:35.856 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:35.856 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:35.856 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:35.856 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:36.114 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:36.114 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:36.114 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:36.114 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:36.114 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:36.114 
08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:36.114 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:36.114 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:36.114 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:36.114 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:36.114 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.114 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.114 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.114 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:36.114 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.114 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.114 [2024-10-05 08:54:12.498244] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:36.114 [2024-10-05 08:54:12.498390] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.114 [2024-10-05 08:54:12.498425] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:36.114 [2024-10-05 08:54:12.498438] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.114 [2024-10-05 08:54:12.501063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.114 [2024-10-05 08:54:12.501102] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:36.114 [2024-10-05 08:54:12.501259] bdev_raid.c:3897:raid_bdev_examine_cont: 
*DEBUG*: raid superblock found on bdev spare 00:17:36.114 [2024-10-05 08:54:12.501323] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:36.114 [2024-10-05 08:54:12.501513] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:36.114 spare 00:17:36.114 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.114 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:36.114 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.114 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.372 [2024-10-05 08:54:12.601447] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:36.372 [2024-10-05 08:54:12.601483] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:36.372 [2024-10-05 08:54:12.601780] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:36.372 [2024-10-05 08:54:12.601989] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:36.372 [2024-10-05 08:54:12.602001] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:36.372 [2024-10-05 08:54:12.602196] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.372 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.372 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:36.372 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:36.372 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:17:36.372 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:36.372 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:36.372 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:36.372 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.372 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.372 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.372 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.372 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.372 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.372 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.372 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.372 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.372 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.372 "name": "raid_bdev1", 00:17:36.372 "uuid": "cbe68a45-db8c-4afc-b33f-35639dd66aeb", 00:17:36.372 "strip_size_kb": 0, 00:17:36.372 "state": "online", 00:17:36.372 "raid_level": "raid1", 00:17:36.372 "superblock": true, 00:17:36.372 "num_base_bdevs": 2, 00:17:36.372 "num_base_bdevs_discovered": 2, 00:17:36.372 "num_base_bdevs_operational": 2, 00:17:36.372 "base_bdevs_list": [ 00:17:36.372 { 00:17:36.372 "name": "spare", 00:17:36.372 "uuid": "73fff75f-9376-5a03-b56a-d2718297914d", 00:17:36.373 "is_configured": true, 00:17:36.373 "data_offset": 256, 
00:17:36.373 "data_size": 7936 00:17:36.373 }, 00:17:36.373 { 00:17:36.373 "name": "BaseBdev2", 00:17:36.373 "uuid": "7ef54dc1-c449-5885-ab3c-2fc2a1b35d28", 00:17:36.373 "is_configured": true, 00:17:36.373 "data_offset": 256, 00:17:36.373 "data_size": 7936 00:17:36.373 } 00:17:36.373 ] 00:17:36.373 }' 00:17:36.373 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.373 08:54:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.631 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:36.631 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.631 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:36.631 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:36.631 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.631 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.631 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.631 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.631 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.631 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.631 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.631 "name": "raid_bdev1", 00:17:36.631 "uuid": "cbe68a45-db8c-4afc-b33f-35639dd66aeb", 00:17:36.631 "strip_size_kb": 0, 00:17:36.631 "state": "online", 00:17:36.631 "raid_level": "raid1", 00:17:36.631 "superblock": true, 00:17:36.631 
"num_base_bdevs": 2, 00:17:36.631 "num_base_bdevs_discovered": 2, 00:17:36.631 "num_base_bdevs_operational": 2, 00:17:36.631 "base_bdevs_list": [ 00:17:36.631 { 00:17:36.631 "name": "spare", 00:17:36.631 "uuid": "73fff75f-9376-5a03-b56a-d2718297914d", 00:17:36.631 "is_configured": true, 00:17:36.631 "data_offset": 256, 00:17:36.631 "data_size": 7936 00:17:36.631 }, 00:17:36.631 { 00:17:36.631 "name": "BaseBdev2", 00:17:36.631 "uuid": "7ef54dc1-c449-5885-ab3c-2fc2a1b35d28", 00:17:36.631 "is_configured": true, 00:17:36.631 "data_offset": 256, 00:17:36.631 "data_size": 7936 00:17:36.631 } 00:17:36.631 ] 00:17:36.631 }' 00:17:36.631 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.631 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:36.631 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.888 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:36.888 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.888 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.888 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.888 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:36.888 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.888 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:36.888 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:36.888 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.889 
08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.889 [2024-10-05 08:54:13.189290] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:36.889 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.889 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:36.889 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:36.889 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:36.889 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:36.889 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:36.889 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:36.889 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.889 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.889 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.889 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.889 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.889 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.889 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.889 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.889 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:36.889 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.889 "name": "raid_bdev1", 00:17:36.889 "uuid": "cbe68a45-db8c-4afc-b33f-35639dd66aeb", 00:17:36.889 "strip_size_kb": 0, 00:17:36.889 "state": "online", 00:17:36.889 "raid_level": "raid1", 00:17:36.889 "superblock": true, 00:17:36.889 "num_base_bdevs": 2, 00:17:36.889 "num_base_bdevs_discovered": 1, 00:17:36.889 "num_base_bdevs_operational": 1, 00:17:36.889 "base_bdevs_list": [ 00:17:36.889 { 00:17:36.889 "name": null, 00:17:36.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.889 "is_configured": false, 00:17:36.889 "data_offset": 0, 00:17:36.889 "data_size": 7936 00:17:36.889 }, 00:17:36.889 { 00:17:36.889 "name": "BaseBdev2", 00:17:36.889 "uuid": "7ef54dc1-c449-5885-ab3c-2fc2a1b35d28", 00:17:36.889 "is_configured": true, 00:17:36.889 "data_offset": 256, 00:17:36.889 "data_size": 7936 00:17:36.889 } 00:17:36.889 ] 00:17:36.889 }' 00:17:36.889 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.889 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.457 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:37.457 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.457 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.457 [2024-10-05 08:54:13.676682] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:37.457 [2024-10-05 08:54:13.676959] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:37.457 [2024-10-05 08:54:13.677064] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:37.457 [2024-10-05 08:54:13.677136] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:37.457 [2024-10-05 08:54:13.692768] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:37.457 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.457 08:54:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:37.457 [2024-10-05 08:54:13.695007] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:38.396 08:54:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:38.396 08:54:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:38.396 08:54:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:38.396 08:54:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:38.396 08:54:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.396 08:54:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.396 08:54:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.396 08:54:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.396 08:54:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.396 08:54:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.396 08:54:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.396 "name": "raid_bdev1", 00:17:38.396 "uuid": "cbe68a45-db8c-4afc-b33f-35639dd66aeb", 00:17:38.396 "strip_size_kb": 0, 00:17:38.396 "state": "online", 
00:17:38.396 "raid_level": "raid1", 00:17:38.396 "superblock": true, 00:17:38.396 "num_base_bdevs": 2, 00:17:38.396 "num_base_bdevs_discovered": 2, 00:17:38.396 "num_base_bdevs_operational": 2, 00:17:38.396 "process": { 00:17:38.396 "type": "rebuild", 00:17:38.396 "target": "spare", 00:17:38.396 "progress": { 00:17:38.396 "blocks": 2560, 00:17:38.396 "percent": 32 00:17:38.396 } 00:17:38.396 }, 00:17:38.396 "base_bdevs_list": [ 00:17:38.396 { 00:17:38.396 "name": "spare", 00:17:38.396 "uuid": "73fff75f-9376-5a03-b56a-d2718297914d", 00:17:38.396 "is_configured": true, 00:17:38.396 "data_offset": 256, 00:17:38.396 "data_size": 7936 00:17:38.396 }, 00:17:38.396 { 00:17:38.396 "name": "BaseBdev2", 00:17:38.396 "uuid": "7ef54dc1-c449-5885-ab3c-2fc2a1b35d28", 00:17:38.396 "is_configured": true, 00:17:38.396 "data_offset": 256, 00:17:38.396 "data_size": 7936 00:17:38.396 } 00:17:38.396 ] 00:17:38.396 }' 00:17:38.396 08:54:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.396 08:54:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:38.396 08:54:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.396 08:54:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:38.396 08:54:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:38.396 08:54:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.396 08:54:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.396 [2024-10-05 08:54:14.858204] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:38.656 [2024-10-05 08:54:14.903751] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:38.656 [2024-10-05 
08:54:14.903900] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:38.656 [2024-10-05 08:54:14.903943] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:38.656 [2024-10-05 08:54:14.903986] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:38.656 08:54:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.656 08:54:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:38.656 08:54:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:38.656 08:54:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.656 08:54:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:38.656 08:54:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:38.656 08:54:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:38.656 08:54:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.656 08:54:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.656 08:54:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.656 08:54:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.656 08:54:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.656 08:54:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.656 08:54:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.656 08:54:14 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:38.656 08:54:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.656 08:54:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.656 "name": "raid_bdev1", 00:17:38.656 "uuid": "cbe68a45-db8c-4afc-b33f-35639dd66aeb", 00:17:38.656 "strip_size_kb": 0, 00:17:38.656 "state": "online", 00:17:38.656 "raid_level": "raid1", 00:17:38.656 "superblock": true, 00:17:38.656 "num_base_bdevs": 2, 00:17:38.656 "num_base_bdevs_discovered": 1, 00:17:38.656 "num_base_bdevs_operational": 1, 00:17:38.656 "base_bdevs_list": [ 00:17:38.656 { 00:17:38.656 "name": null, 00:17:38.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.656 "is_configured": false, 00:17:38.656 "data_offset": 0, 00:17:38.656 "data_size": 7936 00:17:38.656 }, 00:17:38.656 { 00:17:38.656 "name": "BaseBdev2", 00:17:38.656 "uuid": "7ef54dc1-c449-5885-ab3c-2fc2a1b35d28", 00:17:38.656 "is_configured": true, 00:17:38.656 "data_offset": 256, 00:17:38.656 "data_size": 7936 00:17:38.656 } 00:17:38.656 ] 00:17:38.656 }' 00:17:38.656 08:54:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.656 08:54:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.915 08:54:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:38.915 08:54:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.915 08:54:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.175 [2024-10-05 08:54:15.387199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:39.175 [2024-10-05 08:54:15.387334] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.175 [2024-10-05 08:54:15.387382] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000ab80 00:17:39.175 [2024-10-05 08:54:15.387438] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.175 [2024-10-05 08:54:15.388096] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.175 [2024-10-05 08:54:15.388183] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:39.175 [2024-10-05 08:54:15.388323] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:39.175 [2024-10-05 08:54:15.388374] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:39.175 [2024-10-05 08:54:15.388425] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:39.175 [2024-10-05 08:54:15.388505] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:39.175 [2024-10-05 08:54:15.403889] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:17:39.175 spare 00:17:39.175 08:54:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.175 08:54:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:39.175 [2024-10-05 08:54:15.406123] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:40.147 08:54:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:40.147 08:54:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:40.147 08:54:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:40.147 08:54:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:40.147 08:54:16 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:40.147 08:54:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.147 08:54:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.147 08:54:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.147 08:54:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.147 08:54:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.147 08:54:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:40.147 "name": "raid_bdev1", 00:17:40.147 "uuid": "cbe68a45-db8c-4afc-b33f-35639dd66aeb", 00:17:40.147 "strip_size_kb": 0, 00:17:40.147 "state": "online", 00:17:40.147 "raid_level": "raid1", 00:17:40.147 "superblock": true, 00:17:40.147 "num_base_bdevs": 2, 00:17:40.147 "num_base_bdevs_discovered": 2, 00:17:40.147 "num_base_bdevs_operational": 2, 00:17:40.147 "process": { 00:17:40.147 "type": "rebuild", 00:17:40.147 "target": "spare", 00:17:40.147 "progress": { 00:17:40.147 "blocks": 2560, 00:17:40.147 "percent": 32 00:17:40.147 } 00:17:40.147 }, 00:17:40.147 "base_bdevs_list": [ 00:17:40.147 { 00:17:40.147 "name": "spare", 00:17:40.147 "uuid": "73fff75f-9376-5a03-b56a-d2718297914d", 00:17:40.147 "is_configured": true, 00:17:40.147 "data_offset": 256, 00:17:40.147 "data_size": 7936 00:17:40.147 }, 00:17:40.147 { 00:17:40.147 "name": "BaseBdev2", 00:17:40.147 "uuid": "7ef54dc1-c449-5885-ab3c-2fc2a1b35d28", 00:17:40.147 "is_configured": true, 00:17:40.147 "data_offset": 256, 00:17:40.147 "data_size": 7936 00:17:40.147 } 00:17:40.147 ] 00:17:40.147 }' 00:17:40.147 08:54:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:40.147 08:54:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:17:40.147 08:54:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:40.147 08:54:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:40.147 08:54:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:40.147 08:54:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.147 08:54:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.147 [2024-10-05 08:54:16.522428] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:40.421 [2024-10-05 08:54:16.615032] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:40.421 [2024-10-05 08:54:16.615142] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:40.421 [2024-10-05 08:54:16.615168] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:40.421 [2024-10-05 08:54:16.615178] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:40.421 08:54:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.421 08:54:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:40.421 08:54:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:40.421 08:54:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:40.421 08:54:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:40.421 08:54:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:40.421 08:54:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:17:40.421 08:54:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.421 08:54:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.421 08:54:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.421 08:54:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.421 08:54:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.421 08:54:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.421 08:54:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.421 08:54:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.421 08:54:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.421 08:54:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.421 "name": "raid_bdev1", 00:17:40.421 "uuid": "cbe68a45-db8c-4afc-b33f-35639dd66aeb", 00:17:40.421 "strip_size_kb": 0, 00:17:40.421 "state": "online", 00:17:40.421 "raid_level": "raid1", 00:17:40.421 "superblock": true, 00:17:40.421 "num_base_bdevs": 2, 00:17:40.421 "num_base_bdevs_discovered": 1, 00:17:40.421 "num_base_bdevs_operational": 1, 00:17:40.421 "base_bdevs_list": [ 00:17:40.421 { 00:17:40.421 "name": null, 00:17:40.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.421 "is_configured": false, 00:17:40.421 "data_offset": 0, 00:17:40.421 "data_size": 7936 00:17:40.421 }, 00:17:40.421 { 00:17:40.421 "name": "BaseBdev2", 00:17:40.421 "uuid": "7ef54dc1-c449-5885-ab3c-2fc2a1b35d28", 00:17:40.421 "is_configured": true, 00:17:40.421 "data_offset": 256, 00:17:40.421 "data_size": 7936 00:17:40.421 } 00:17:40.421 ] 00:17:40.421 }' 
00:17:40.421 08:54:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.421 08:54:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.680 08:54:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:40.680 08:54:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:40.680 08:54:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:40.680 08:54:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:40.680 08:54:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:40.680 08:54:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.680 08:54:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.680 08:54:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.680 08:54:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.680 08:54:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.680 08:54:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:40.680 "name": "raid_bdev1", 00:17:40.680 "uuid": "cbe68a45-db8c-4afc-b33f-35639dd66aeb", 00:17:40.680 "strip_size_kb": 0, 00:17:40.680 "state": "online", 00:17:40.680 "raid_level": "raid1", 00:17:40.680 "superblock": true, 00:17:40.680 "num_base_bdevs": 2, 00:17:40.680 "num_base_bdevs_discovered": 1, 00:17:40.680 "num_base_bdevs_operational": 1, 00:17:40.680 "base_bdevs_list": [ 00:17:40.680 { 00:17:40.680 "name": null, 00:17:40.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.680 "is_configured": false, 00:17:40.680 "data_offset": 0, 
00:17:40.680 "data_size": 7936 00:17:40.680 }, 00:17:40.680 { 00:17:40.680 "name": "BaseBdev2", 00:17:40.680 "uuid": "7ef54dc1-c449-5885-ab3c-2fc2a1b35d28", 00:17:40.680 "is_configured": true, 00:17:40.680 "data_offset": 256, 00:17:40.680 "data_size": 7936 00:17:40.680 } 00:17:40.680 ] 00:17:40.680 }' 00:17:40.680 08:54:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:40.680 08:54:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:40.680 08:54:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:40.940 08:54:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:40.940 08:54:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:40.940 08:54:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.940 08:54:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.940 08:54:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.940 08:54:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:40.940 08:54:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.940 08:54:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.940 [2024-10-05 08:54:17.174606] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:40.940 [2024-10-05 08:54:17.174663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:40.940 [2024-10-05 08:54:17.174690] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:40.940 [2024-10-05 08:54:17.174702] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:40.940 [2024-10-05 08:54:17.175282] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:40.940 [2024-10-05 08:54:17.175311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:40.940 [2024-10-05 08:54:17.175399] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:40.940 [2024-10-05 08:54:17.175414] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:40.940 [2024-10-05 08:54:17.175434] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:40.940 [2024-10-05 08:54:17.175446] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:40.940 BaseBdev1 00:17:40.940 08:54:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.940 08:54:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:41.877 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:41.877 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:41.877 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:41.877 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:41.877 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:41.877 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:41.877 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.877 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.877 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.877 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.877 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.878 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.878 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.878 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.878 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.878 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.878 "name": "raid_bdev1", 00:17:41.878 "uuid": "cbe68a45-db8c-4afc-b33f-35639dd66aeb", 00:17:41.878 "strip_size_kb": 0, 00:17:41.878 "state": "online", 00:17:41.878 "raid_level": "raid1", 00:17:41.878 "superblock": true, 00:17:41.878 "num_base_bdevs": 2, 00:17:41.878 "num_base_bdevs_discovered": 1, 00:17:41.878 "num_base_bdevs_operational": 1, 00:17:41.878 "base_bdevs_list": [ 00:17:41.878 { 00:17:41.878 "name": null, 00:17:41.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.878 "is_configured": false, 00:17:41.878 "data_offset": 0, 00:17:41.878 "data_size": 7936 00:17:41.878 }, 00:17:41.878 { 00:17:41.878 "name": "BaseBdev2", 00:17:41.878 "uuid": "7ef54dc1-c449-5885-ab3c-2fc2a1b35d28", 00:17:41.878 "is_configured": true, 00:17:41.878 "data_offset": 256, 00:17:41.878 "data_size": 7936 00:17:41.878 } 00:17:41.878 ] 00:17:41.878 }' 00:17:41.878 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.878 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:17:42.445 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:42.445 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.445 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:42.445 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:42.445 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.445 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.445 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.445 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.445 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.445 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.445 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.445 "name": "raid_bdev1", 00:17:42.445 "uuid": "cbe68a45-db8c-4afc-b33f-35639dd66aeb", 00:17:42.445 "strip_size_kb": 0, 00:17:42.445 "state": "online", 00:17:42.445 "raid_level": "raid1", 00:17:42.445 "superblock": true, 00:17:42.445 "num_base_bdevs": 2, 00:17:42.446 "num_base_bdevs_discovered": 1, 00:17:42.446 "num_base_bdevs_operational": 1, 00:17:42.446 "base_bdevs_list": [ 00:17:42.446 { 00:17:42.446 "name": null, 00:17:42.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.446 "is_configured": false, 00:17:42.446 "data_offset": 0, 00:17:42.446 "data_size": 7936 00:17:42.446 }, 00:17:42.446 { 00:17:42.446 "name": "BaseBdev2", 00:17:42.446 "uuid": "7ef54dc1-c449-5885-ab3c-2fc2a1b35d28", 00:17:42.446 "is_configured": true, 
00:17:42.446 "data_offset": 256, 00:17:42.446 "data_size": 7936 00:17:42.446 } 00:17:42.446 ] 00:17:42.446 }' 00:17:42.446 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.446 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:42.446 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.446 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:42.446 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:42.446 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # local es=0 00:17:42.446 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:42.446 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:42.446 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:42.446 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:42.446 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:42.446 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:42.446 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.446 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.446 [2024-10-05 08:54:18.780122] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:42.446 [2024-10-05 08:54:18.780399] 
bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:42.446 [2024-10-05 08:54:18.780423] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:42.446 request: 00:17:42.446 { 00:17:42.446 "base_bdev": "BaseBdev1", 00:17:42.446 "raid_bdev": "raid_bdev1", 00:17:42.446 "method": "bdev_raid_add_base_bdev", 00:17:42.446 "req_id": 1 00:17:42.446 } 00:17:42.446 Got JSON-RPC error response 00:17:42.446 response: 00:17:42.446 { 00:17:42.446 "code": -22, 00:17:42.446 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:42.446 } 00:17:42.446 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:42.446 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:17:42.446 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:42.446 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:42.446 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:42.446 08:54:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:43.386 08:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:43.386 08:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:43.386 08:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:43.386 08:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:43.386 08:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:43.386 08:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:17:43.386 08:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.386 08:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.386 08:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.386 08:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.386 08:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.386 08:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.386 08:54:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.386 08:54:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.386 08:54:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.386 08:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.386 "name": "raid_bdev1", 00:17:43.386 "uuid": "cbe68a45-db8c-4afc-b33f-35639dd66aeb", 00:17:43.386 "strip_size_kb": 0, 00:17:43.386 "state": "online", 00:17:43.386 "raid_level": "raid1", 00:17:43.386 "superblock": true, 00:17:43.386 "num_base_bdevs": 2, 00:17:43.386 "num_base_bdevs_discovered": 1, 00:17:43.386 "num_base_bdevs_operational": 1, 00:17:43.386 "base_bdevs_list": [ 00:17:43.386 { 00:17:43.386 "name": null, 00:17:43.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.386 "is_configured": false, 00:17:43.386 "data_offset": 0, 00:17:43.386 "data_size": 7936 00:17:43.386 }, 00:17:43.386 { 00:17:43.386 "name": "BaseBdev2", 00:17:43.386 "uuid": "7ef54dc1-c449-5885-ab3c-2fc2a1b35d28", 00:17:43.386 "is_configured": true, 00:17:43.386 "data_offset": 256, 00:17:43.386 "data_size": 7936 00:17:43.386 } 00:17:43.386 ] 00:17:43.386 }' 
00:17:43.386 08:54:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.386 08:54:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.956 08:54:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:43.956 08:54:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.956 08:54:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:43.956 08:54:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:43.956 08:54:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.956 08:54:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.956 08:54:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.956 08:54:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.956 08:54:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.956 08:54:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.956 08:54:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.956 "name": "raid_bdev1", 00:17:43.956 "uuid": "cbe68a45-db8c-4afc-b33f-35639dd66aeb", 00:17:43.956 "strip_size_kb": 0, 00:17:43.956 "state": "online", 00:17:43.956 "raid_level": "raid1", 00:17:43.956 "superblock": true, 00:17:43.956 "num_base_bdevs": 2, 00:17:43.956 "num_base_bdevs_discovered": 1, 00:17:43.956 "num_base_bdevs_operational": 1, 00:17:43.956 "base_bdevs_list": [ 00:17:43.956 { 00:17:43.956 "name": null, 00:17:43.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.956 "is_configured": false, 00:17:43.956 "data_offset": 0, 
00:17:43.956 "data_size": 7936 00:17:43.956 }, 00:17:43.956 { 00:17:43.956 "name": "BaseBdev2", 00:17:43.956 "uuid": "7ef54dc1-c449-5885-ab3c-2fc2a1b35d28", 00:17:43.956 "is_configured": true, 00:17:43.956 "data_offset": 256, 00:17:43.956 "data_size": 7936 00:17:43.956 } 00:17:43.956 ] 00:17:43.956 }' 00:17:43.956 08:54:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.956 08:54:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:43.956 08:54:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.956 08:54:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:43.956 08:54:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 82833 00:17:43.956 08:54:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 82833 ']' 00:17:43.956 08:54:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 82833 00:17:43.956 08:54:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:17:43.957 08:54:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:43.957 08:54:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82833 00:17:43.957 08:54:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:43.957 08:54:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:43.957 08:54:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82833' 00:17:43.957 killing process with pid 82833 00:17:43.957 Received shutdown signal, test time was about 60.000000 seconds 00:17:43.957 00:17:43.957 Latency(us) 00:17:43.957 Device Information : runtime(s) IOPS 
MiB/s Fail/s TO/s Average min max 00:17:43.957 =================================================================================================================== 00:17:43.957 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:43.957 08:54:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 82833 00:17:43.957 [2024-10-05 08:54:20.394638] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:43.957 08:54:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 82833 00:17:43.957 [2024-10-05 08:54:20.394787] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:43.957 [2024-10-05 08:54:20.394849] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:43.957 [2024-10-05 08:54:20.394864] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:44.526 [2024-10-05 08:54:20.704141] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:45.908 08:54:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:17:45.908 ************************************ 00:17:45.908 END TEST raid_rebuild_test_sb_4k 00:17:45.908 ************************************ 00:17:45.908 00:17:45.908 real 0m19.970s 00:17:45.908 user 0m25.858s 00:17:45.908 sys 0m2.685s 00:17:45.908 08:54:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:45.908 08:54:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.908 08:54:22 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:17:45.909 08:54:22 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:17:45.909 08:54:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:45.909 08:54:22 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:17:45.909 08:54:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:45.909 ************************************ 00:17:45.909 START TEST raid_state_function_test_sb_md_separate 00:17:45.909 ************************************ 00:17:45.909 08:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:17:45.909 08:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:45.909 08:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:45.909 08:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:45.909 08:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:45.909 08:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:45.909 08:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:45.909 08:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:45.909 08:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:45.909 08:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:45.909 08:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:45.909 08:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:45.909 08:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:45.909 08:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:45.909 08:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:45.909 08:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:45.909 08:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:45.909 08:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:45.909 08:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:45.909 08:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:45.909 08:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:45.909 08:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:45.909 08:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:45.909 Process raid pid: 83409 00:17:45.909 08:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=83409 00:17:45.909 08:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:45.909 08:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83409' 00:17:45.909 08:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 83409 00:17:45.909 08:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 83409 ']' 00:17:45.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:45.909 08:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.909 08:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:45.909 08:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.909 08:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:45.909 08:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.909 [2024-10-05 08:54:22.174714] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:17:45.909 [2024-10-05 08:54:22.174880] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:45.909 [2024-10-05 08:54:22.340412] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.169 [2024-10-05 08:54:22.583323] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.429 [2024-10-05 08:54:22.803412] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:46.429 [2024-10-05 08:54:22.803556] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:46.688 08:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:46.688 08:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:17:46.688 08:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:17:46.688 08:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.688 08:54:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.689 [2024-10-05 08:54:22.997851] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:46.689 [2024-10-05 08:54:22.998019] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:46.689 [2024-10-05 08:54:22.998060] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:46.689 [2024-10-05 08:54:22.998093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:46.689 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.689 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:46.689 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:46.689 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:46.689 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:46.689 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:46.689 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:46.689 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.689 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.689 08:54:23 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.689 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.689 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.689 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.689 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.689 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.689 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.689 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.689 "name": "Existed_Raid", 00:17:46.689 "uuid": "02131b8c-7053-4183-9afa-1e92a44cbed5", 00:17:46.689 "strip_size_kb": 0, 00:17:46.689 "state": "configuring", 00:17:46.689 "raid_level": "raid1", 00:17:46.689 "superblock": true, 00:17:46.689 "num_base_bdevs": 2, 00:17:46.689 "num_base_bdevs_discovered": 0, 00:17:46.689 "num_base_bdevs_operational": 2, 00:17:46.689 "base_bdevs_list": [ 00:17:46.689 { 00:17:46.689 "name": "BaseBdev1", 00:17:46.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.689 "is_configured": false, 00:17:46.689 "data_offset": 0, 00:17:46.689 "data_size": 0 00:17:46.689 }, 00:17:46.689 { 00:17:46.689 "name": "BaseBdev2", 00:17:46.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.689 "is_configured": false, 00:17:46.689 "data_offset": 0, 00:17:46.689 "data_size": 0 00:17:46.689 } 00:17:46.689 ] 00:17:46.689 }' 00:17:46.689 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.689 08:54:23 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.259 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:47.259 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.259 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.259 [2024-10-05 08:54:23.429035] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:47.259 [2024-10-05 08:54:23.429137] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:47.259 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.259 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:47.259 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.259 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.259 [2024-10-05 08:54:23.441052] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:47.259 [2024-10-05 08:54:23.441146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:47.259 [2024-10-05 08:54:23.441202] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:47.259 [2024-10-05 08:54:23.441234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:47.259 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.259 08:54:23 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:17:47.259 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.259 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.259 [2024-10-05 08:54:23.529089] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:47.259 BaseBdev1 00:17:47.259 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.259 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:47.259 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:47.259 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:47.259 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:17:47.259 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:47.259 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:47.259 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:47.259 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.259 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.259 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.259 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:47.259 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.259 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.259 [ 00:17:47.259 { 00:17:47.259 "name": "BaseBdev1", 00:17:47.259 "aliases": [ 00:17:47.259 "3534f1d5-0a96-4c96-9ffc-c9efef556f57" 00:17:47.259 ], 00:17:47.259 "product_name": "Malloc disk", 00:17:47.259 "block_size": 4096, 00:17:47.259 "num_blocks": 8192, 00:17:47.259 "uuid": "3534f1d5-0a96-4c96-9ffc-c9efef556f57", 00:17:47.259 "md_size": 32, 00:17:47.259 "md_interleave": false, 00:17:47.259 "dif_type": 0, 00:17:47.259 "assigned_rate_limits": { 00:17:47.259 "rw_ios_per_sec": 0, 00:17:47.259 "rw_mbytes_per_sec": 0, 00:17:47.259 "r_mbytes_per_sec": 0, 00:17:47.259 "w_mbytes_per_sec": 0 00:17:47.259 }, 00:17:47.259 "claimed": true, 00:17:47.259 "claim_type": "exclusive_write", 00:17:47.259 "zoned": false, 00:17:47.259 "supported_io_types": { 00:17:47.259 "read": true, 00:17:47.259 "write": true, 00:17:47.259 "unmap": true, 00:17:47.259 "flush": true, 00:17:47.259 "reset": true, 00:17:47.259 "nvme_admin": false, 00:17:47.260 "nvme_io": false, 00:17:47.260 "nvme_io_md": false, 00:17:47.260 "write_zeroes": true, 00:17:47.260 "zcopy": true, 00:17:47.260 "get_zone_info": false, 00:17:47.260 "zone_management": false, 00:17:47.260 "zone_append": false, 00:17:47.260 "compare": false, 00:17:47.260 "compare_and_write": false, 00:17:47.260 "abort": true, 00:17:47.260 "seek_hole": false, 00:17:47.260 "seek_data": false, 00:17:47.260 "copy": true, 00:17:47.260 "nvme_iov_md": false 00:17:47.260 }, 00:17:47.260 "memory_domains": [ 00:17:47.260 { 00:17:47.260 "dma_device_id": "system", 00:17:47.260 "dma_device_type": 1 00:17:47.260 }, 00:17:47.260 { 00:17:47.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.260 "dma_device_type": 2 00:17:47.260 } 
00:17:47.260 ], 00:17:47.260 "driver_specific": {} 00:17:47.260 } 00:17:47.260 ] 00:17:47.260 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.260 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:17:47.260 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:47.260 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:47.260 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:47.260 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:47.260 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:47.260 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:47.260 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.260 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.260 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.260 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.260 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.260 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.260 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.260 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.260 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.260 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.260 "name": "Existed_Raid", 00:17:47.260 "uuid": "145c76a4-f26e-49b2-a2a0-a87cfbfe0bf0", 00:17:47.260 "strip_size_kb": 0, 00:17:47.260 "state": "configuring", 00:17:47.260 "raid_level": "raid1", 00:17:47.260 "superblock": true, 00:17:47.260 "num_base_bdevs": 2, 00:17:47.260 "num_base_bdevs_discovered": 1, 00:17:47.260 "num_base_bdevs_operational": 2, 00:17:47.260 "base_bdevs_list": [ 00:17:47.260 { 00:17:47.260 "name": "BaseBdev1", 00:17:47.260 "uuid": "3534f1d5-0a96-4c96-9ffc-c9efef556f57", 00:17:47.260 "is_configured": true, 00:17:47.260 "data_offset": 256, 00:17:47.260 "data_size": 7936 00:17:47.260 }, 00:17:47.260 { 00:17:47.260 "name": "BaseBdev2", 00:17:47.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.260 "is_configured": false, 00:17:47.260 "data_offset": 0, 00:17:47.260 "data_size": 0 00:17:47.260 } 00:17:47.260 ] 00:17:47.260 }' 00:17:47.260 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.260 08:54:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.829 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:47.829 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.829 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.829 [2024-10-05 08:54:24.036265] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:17:47.829 [2024-10-05 08:54:24.036315] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:47.829 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.829 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:47.829 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.829 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.829 [2024-10-05 08:54:24.044322] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:47.829 [2024-10-05 08:54:24.046307] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:47.829 [2024-10-05 08:54:24.046362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:47.829 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.829 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:47.829 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:47.829 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:47.829 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:47.829 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:47.829 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:47.829 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:47.829 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:47.829 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.829 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.829 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.829 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.829 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.829 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.829 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.829 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.829 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.829 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.829 "name": "Existed_Raid", 00:17:47.829 "uuid": "854efd4e-c76e-45ef-b7c7-f722b8cd7b6d", 00:17:47.829 "strip_size_kb": 0, 00:17:47.829 "state": "configuring", 00:17:47.829 "raid_level": "raid1", 00:17:47.829 "superblock": true, 00:17:47.829 "num_base_bdevs": 2, 00:17:47.829 "num_base_bdevs_discovered": 1, 00:17:47.829 "num_base_bdevs_operational": 2, 00:17:47.829 "base_bdevs_list": [ 00:17:47.829 { 00:17:47.829 "name": 
"BaseBdev1", 00:17:47.830 "uuid": "3534f1d5-0a96-4c96-9ffc-c9efef556f57", 00:17:47.830 "is_configured": true, 00:17:47.830 "data_offset": 256, 00:17:47.830 "data_size": 7936 00:17:47.830 }, 00:17:47.830 { 00:17:47.830 "name": "BaseBdev2", 00:17:47.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.830 "is_configured": false, 00:17:47.830 "data_offset": 0, 00:17:47.830 "data_size": 0 00:17:47.830 } 00:17:47.830 ] 00:17:47.830 }' 00:17:47.830 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.830 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.089 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:17:48.089 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.089 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.089 [2024-10-05 08:54:24.520497] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:48.089 [2024-10-05 08:54:24.520922] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:48.089 [2024-10-05 08:54:24.521004] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:48.089 [2024-10-05 08:54:24.521153] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:48.089 [2024-10-05 08:54:24.521342] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:48.089 [2024-10-05 08:54:24.521393] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:48.089 [2024-10-05 08:54:24.521576] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.089 BaseBdev2 
00:17:48.089 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.089 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:48.089 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:48.089 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:48.089 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:17:48.089 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:48.089 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:48.089 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:48.089 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.089 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.089 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.089 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:48.089 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.089 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.090 [ 00:17:48.090 { 00:17:48.090 "name": "BaseBdev2", 00:17:48.090 "aliases": [ 00:17:48.090 "eeee0f14-915e-40a8-9676-4d4700c24dbd" 00:17:48.090 ], 00:17:48.090 "product_name": "Malloc disk", 00:17:48.090 
"block_size": 4096, 00:17:48.090 "num_blocks": 8192, 00:17:48.090 "uuid": "eeee0f14-915e-40a8-9676-4d4700c24dbd", 00:17:48.090 "md_size": 32, 00:17:48.090 "md_interleave": false, 00:17:48.090 "dif_type": 0, 00:17:48.090 "assigned_rate_limits": { 00:17:48.090 "rw_ios_per_sec": 0, 00:17:48.090 "rw_mbytes_per_sec": 0, 00:17:48.090 "r_mbytes_per_sec": 0, 00:17:48.090 "w_mbytes_per_sec": 0 00:17:48.090 }, 00:17:48.090 "claimed": true, 00:17:48.090 "claim_type": "exclusive_write", 00:17:48.090 "zoned": false, 00:17:48.090 "supported_io_types": { 00:17:48.090 "read": true, 00:17:48.090 "write": true, 00:17:48.090 "unmap": true, 00:17:48.090 "flush": true, 00:17:48.090 "reset": true, 00:17:48.090 "nvme_admin": false, 00:17:48.090 "nvme_io": false, 00:17:48.090 "nvme_io_md": false, 00:17:48.090 "write_zeroes": true, 00:17:48.090 "zcopy": true, 00:17:48.090 "get_zone_info": false, 00:17:48.090 "zone_management": false, 00:17:48.090 "zone_append": false, 00:17:48.090 "compare": false, 00:17:48.090 "compare_and_write": false, 00:17:48.090 "abort": true, 00:17:48.090 "seek_hole": false, 00:17:48.090 "seek_data": false, 00:17:48.090 "copy": true, 00:17:48.090 "nvme_iov_md": false 00:17:48.090 }, 00:17:48.090 "memory_domains": [ 00:17:48.090 { 00:17:48.090 "dma_device_id": "system", 00:17:48.090 "dma_device_type": 1 00:17:48.350 }, 00:17:48.350 { 00:17:48.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.350 "dma_device_type": 2 00:17:48.350 } 00:17:48.350 ], 00:17:48.350 "driver_specific": {} 00:17:48.350 } 00:17:48.350 ] 00:17:48.350 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.350 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:17:48.350 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:48.350 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i 
< num_base_bdevs )) 00:17:48.350 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:48.350 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:48.350 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:48.350 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:48.350 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:48.350 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:48.350 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.350 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.350 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.350 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.350 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.350 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:48.350 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.350 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.350 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.350 08:54:24 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.350 "name": "Existed_Raid", 00:17:48.351 "uuid": "854efd4e-c76e-45ef-b7c7-f722b8cd7b6d", 00:17:48.351 "strip_size_kb": 0, 00:17:48.351 "state": "online", 00:17:48.351 "raid_level": "raid1", 00:17:48.351 "superblock": true, 00:17:48.351 "num_base_bdevs": 2, 00:17:48.351 "num_base_bdevs_discovered": 2, 00:17:48.351 "num_base_bdevs_operational": 2, 00:17:48.351 "base_bdevs_list": [ 00:17:48.351 { 00:17:48.351 "name": "BaseBdev1", 00:17:48.351 "uuid": "3534f1d5-0a96-4c96-9ffc-c9efef556f57", 00:17:48.351 "is_configured": true, 00:17:48.351 "data_offset": 256, 00:17:48.351 "data_size": 7936 00:17:48.351 }, 00:17:48.351 { 00:17:48.351 "name": "BaseBdev2", 00:17:48.351 "uuid": "eeee0f14-915e-40a8-9676-4d4700c24dbd", 00:17:48.351 "is_configured": true, 00:17:48.351 "data_offset": 256, 00:17:48.351 "data_size": 7936 00:17:48.351 } 00:17:48.351 ] 00:17:48.351 }' 00:17:48.351 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.351 08:54:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.611 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:48.611 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:48.611 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:48.611 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:48.611 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:48.611 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:48.611 
08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:48.611 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:48.611 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.611 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.611 [2024-10-05 08:54:25.012061] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:48.611 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.611 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:48.611 "name": "Existed_Raid", 00:17:48.611 "aliases": [ 00:17:48.611 "854efd4e-c76e-45ef-b7c7-f722b8cd7b6d" 00:17:48.611 ], 00:17:48.611 "product_name": "Raid Volume", 00:17:48.611 "block_size": 4096, 00:17:48.611 "num_blocks": 7936, 00:17:48.611 "uuid": "854efd4e-c76e-45ef-b7c7-f722b8cd7b6d", 00:17:48.611 "md_size": 32, 00:17:48.611 "md_interleave": false, 00:17:48.611 "dif_type": 0, 00:17:48.611 "assigned_rate_limits": { 00:17:48.611 "rw_ios_per_sec": 0, 00:17:48.611 "rw_mbytes_per_sec": 0, 00:17:48.611 "r_mbytes_per_sec": 0, 00:17:48.611 "w_mbytes_per_sec": 0 00:17:48.611 }, 00:17:48.611 "claimed": false, 00:17:48.611 "zoned": false, 00:17:48.611 "supported_io_types": { 00:17:48.611 "read": true, 00:17:48.611 "write": true, 00:17:48.611 "unmap": false, 00:17:48.611 "flush": false, 00:17:48.611 "reset": true, 00:17:48.611 "nvme_admin": false, 00:17:48.611 "nvme_io": false, 00:17:48.611 "nvme_io_md": false, 00:17:48.611 "write_zeroes": true, 00:17:48.611 "zcopy": false, 00:17:48.611 "get_zone_info": false, 00:17:48.611 "zone_management": false, 00:17:48.611 "zone_append": false, 00:17:48.611 "compare": false, 00:17:48.611 
"compare_and_write": false, 00:17:48.611 "abort": false, 00:17:48.611 "seek_hole": false, 00:17:48.611 "seek_data": false, 00:17:48.611 "copy": false, 00:17:48.611 "nvme_iov_md": false 00:17:48.611 }, 00:17:48.611 "memory_domains": [ 00:17:48.611 { 00:17:48.611 "dma_device_id": "system", 00:17:48.611 "dma_device_type": 1 00:17:48.611 }, 00:17:48.611 { 00:17:48.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.611 "dma_device_type": 2 00:17:48.611 }, 00:17:48.611 { 00:17:48.611 "dma_device_id": "system", 00:17:48.611 "dma_device_type": 1 00:17:48.611 }, 00:17:48.611 { 00:17:48.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.611 "dma_device_type": 2 00:17:48.611 } 00:17:48.611 ], 00:17:48.611 "driver_specific": { 00:17:48.611 "raid": { 00:17:48.611 "uuid": "854efd4e-c76e-45ef-b7c7-f722b8cd7b6d", 00:17:48.611 "strip_size_kb": 0, 00:17:48.611 "state": "online", 00:17:48.611 "raid_level": "raid1", 00:17:48.611 "superblock": true, 00:17:48.611 "num_base_bdevs": 2, 00:17:48.611 "num_base_bdevs_discovered": 2, 00:17:48.611 "num_base_bdevs_operational": 2, 00:17:48.611 "base_bdevs_list": [ 00:17:48.611 { 00:17:48.611 "name": "BaseBdev1", 00:17:48.611 "uuid": "3534f1d5-0a96-4c96-9ffc-c9efef556f57", 00:17:48.611 "is_configured": true, 00:17:48.611 "data_offset": 256, 00:17:48.611 "data_size": 7936 00:17:48.611 }, 00:17:48.611 { 00:17:48.611 "name": "BaseBdev2", 00:17:48.611 "uuid": "eeee0f14-915e-40a8-9676-4d4700c24dbd", 00:17:48.611 "is_configured": true, 00:17:48.611 "data_offset": 256, 00:17:48.611 "data_size": 7936 00:17:48.611 } 00:17:48.612 ] 00:17:48.612 } 00:17:48.612 } 00:17:48.612 }' 00:17:48.612 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:48.612 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:48.612 BaseBdev2' 00:17:48.872 08:54:25 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:48.872 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:48.872 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:48.872 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:48.872 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.872 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.872 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:48.872 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.872 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:48.872 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:48.872 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:48.872 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:48.872 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:48.872 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.872 08:54:25 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.872 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.872 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:48.872 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:48.872 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:48.872 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.872 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.872 [2024-10-05 08:54:25.227489] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:48.872 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.872 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:48.872 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:48.872 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:48.872 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:48.872 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:48.872 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:48.872 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:17:48.872 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:48.872 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:48.872 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:48.872 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:48.872 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.872 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.872 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.872 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.132 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.132 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.132 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.132 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.132 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.132 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.132 "name": "Existed_Raid", 00:17:49.132 "uuid": "854efd4e-c76e-45ef-b7c7-f722b8cd7b6d", 00:17:49.132 "strip_size_kb": 0, 00:17:49.132 "state": "online", 00:17:49.132 "raid_level": "raid1", 
00:17:49.132 "superblock": true, 00:17:49.132 "num_base_bdevs": 2, 00:17:49.132 "num_base_bdevs_discovered": 1, 00:17:49.132 "num_base_bdevs_operational": 1, 00:17:49.132 "base_bdevs_list": [ 00:17:49.132 { 00:17:49.132 "name": null, 00:17:49.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.132 "is_configured": false, 00:17:49.132 "data_offset": 0, 00:17:49.132 "data_size": 7936 00:17:49.132 }, 00:17:49.132 { 00:17:49.132 "name": "BaseBdev2", 00:17:49.132 "uuid": "eeee0f14-915e-40a8-9676-4d4700c24dbd", 00:17:49.132 "is_configured": true, 00:17:49.132 "data_offset": 256, 00:17:49.132 "data_size": 7936 00:17:49.132 } 00:17:49.132 ] 00:17:49.132 }' 00:17:49.132 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.132 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.391 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:49.391 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:49.391 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:49.391 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.391 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.391 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.391 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.391 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:49.391 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' 
Existed_Raid '!=' Existed_Raid ']' 00:17:49.391 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:49.391 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.391 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.391 [2024-10-05 08:54:25.857704] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:49.391 [2024-10-05 08:54:25.857839] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:49.651 [2024-10-05 08:54:25.964411] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:49.651 [2024-10-05 08:54:25.964478] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:49.651 [2024-10-05 08:54:25.964492] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:49.651 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.651 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:49.651 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:49.651 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.651 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.651 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:49.651 08:54:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.651 08:54:25 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.651 08:54:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:49.651 08:54:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:49.651 08:54:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:49.651 08:54:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 83409 00:17:49.651 08:54:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 83409 ']' 00:17:49.651 08:54:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 83409 00:17:49.651 08:54:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:17:49.651 08:54:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:49.651 08:54:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83409 00:17:49.651 08:54:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:49.651 killing process with pid 83409 00:17:49.651 08:54:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:49.651 08:54:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83409' 00:17:49.651 08:54:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 83409 00:17:49.651 [2024-10-05 08:54:26.061850] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:49.651 08:54:26 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@974 -- # wait 83409 00:17:49.651 [2024-10-05 08:54:26.078588] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:51.034 08:54:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:17:51.034 00:17:51.034 real 0m5.321s 00:17:51.034 user 0m7.374s 00:17:51.034 sys 0m1.020s 00:17:51.034 ************************************ 00:17:51.034 END TEST raid_state_function_test_sb_md_separate 00:17:51.034 ************************************ 00:17:51.034 08:54:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:51.034 08:54:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.034 08:54:27 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:17:51.034 08:54:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:51.034 08:54:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:51.034 08:54:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:51.034 ************************************ 00:17:51.034 START TEST raid_superblock_test_md_separate 00:17:51.034 ************************************ 00:17:51.034 08:54:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:17:51.034 08:54:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:51.034 08:54:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:51.034 08:54:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:51.034 08:54:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:51.034 08:54:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # 
base_bdevs_pt=() 00:17:51.034 08:54:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:51.034 08:54:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:51.034 08:54:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:51.034 08:54:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:51.034 08:54:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:51.034 08:54:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:51.034 08:54:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:51.034 08:54:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:51.034 08:54:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:51.034 08:54:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:51.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:51.034 08:54:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=83626 00:17:51.034 08:54:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:51.034 08:54:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 83626 00:17:51.034 08:54:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 83626 ']' 00:17:51.034 08:54:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.034 08:54:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:51.034 08:54:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.034 08:54:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:51.034 08:54:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.295 [2024-10-05 08:54:27.570570] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:17:51.295 [2024-10-05 08:54:27.570693] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83626 ] 00:17:51.295 [2024-10-05 08:54:27.719093] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.555 [2024-10-05 08:54:27.956706] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.816 [2024-10-05 08:54:28.185383] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:51.816 [2024-10-05 08:54:28.185438] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:52.075 08:54:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:52.075 08:54:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:17:52.075 08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:52.075 08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:52.075 08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:52.075 08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:52.075 08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:52.075 08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:52.075 08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:52.075 08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:52.075 08:54:28 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:17:52.075 08:54:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.075 08:54:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.075 malloc1 00:17:52.075 08:54:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.075 08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:52.075 08:54:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.075 08:54:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.075 [2024-10-05 08:54:28.452739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:52.075 [2024-10-05 08:54:28.452929] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:52.075 [2024-10-05 08:54:28.453014] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:52.075 [2024-10-05 08:54:28.453059] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:52.075 [2024-10-05 08:54:28.455292] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:52.075 [2024-10-05 08:54:28.455395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:52.075 pt1 00:17:52.075 08:54:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.075 08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:52.075 08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:52.075 
08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:52.075 08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:52.075 08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:52.075 08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:52.075 08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:52.075 08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:52.075 08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:17:52.075 08:54:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.075 08:54:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.075 malloc2 00:17:52.076 08:54:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.076 08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:52.076 08:54:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.076 08:54:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.335 [2024-10-05 08:54:28.547094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:52.335 [2024-10-05 08:54:28.547179] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:52.335 [2024-10-05 08:54:28.547210] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000007e80 00:17:52.335 [2024-10-05 08:54:28.547221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:52.335 [2024-10-05 08:54:28.549425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:52.335 [2024-10-05 08:54:28.549467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:52.335 pt2 00:17:52.335 08:54:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.335 08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:52.335 08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:52.335 08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:52.335 08:54:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.336 08:54:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.336 [2024-10-05 08:54:28.559162] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:52.336 [2024-10-05 08:54:28.561269] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:52.336 [2024-10-05 08:54:28.561468] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:52.336 [2024-10-05 08:54:28.561483] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:52.336 [2024-10-05 08:54:28.561572] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:52.336 [2024-10-05 08:54:28.561720] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:52.336 [2024-10-05 08:54:28.561733] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:52.336 [2024-10-05 08:54:28.561870] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:52.336 08:54:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.336 08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:52.336 08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:52.336 08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.336 08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:52.336 08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:52.336 08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:52.336 08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.336 08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.336 08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.336 08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.336 08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.336 08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.336 08:54:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.336 08:54:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:17:52.336 08:54:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.336 08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.336 "name": "raid_bdev1", 00:17:52.336 "uuid": "9c531e49-1acf-449b-9ae6-27f1c6296437", 00:17:52.336 "strip_size_kb": 0, 00:17:52.336 "state": "online", 00:17:52.336 "raid_level": "raid1", 00:17:52.336 "superblock": true, 00:17:52.336 "num_base_bdevs": 2, 00:17:52.336 "num_base_bdevs_discovered": 2, 00:17:52.336 "num_base_bdevs_operational": 2, 00:17:52.336 "base_bdevs_list": [ 00:17:52.336 { 00:17:52.336 "name": "pt1", 00:17:52.336 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:52.336 "is_configured": true, 00:17:52.336 "data_offset": 256, 00:17:52.336 "data_size": 7936 00:17:52.336 }, 00:17:52.336 { 00:17:52.336 "name": "pt2", 00:17:52.336 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:52.336 "is_configured": true, 00:17:52.336 "data_offset": 256, 00:17:52.336 "data_size": 7936 00:17:52.336 } 00:17:52.336 ] 00:17:52.336 }' 00:17:52.336 08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.336 08:54:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.596 08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:52.596 08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:52.596 08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:52.596 08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:52.596 08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:52.596 08:54:28 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:52.596 08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:52.596 08:54:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.596 08:54:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.596 08:54:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:52.596 [2024-10-05 08:54:28.994631] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:52.596 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.596 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:52.596 "name": "raid_bdev1", 00:17:52.596 "aliases": [ 00:17:52.596 "9c531e49-1acf-449b-9ae6-27f1c6296437" 00:17:52.596 ], 00:17:52.596 "product_name": "Raid Volume", 00:17:52.596 "block_size": 4096, 00:17:52.596 "num_blocks": 7936, 00:17:52.596 "uuid": "9c531e49-1acf-449b-9ae6-27f1c6296437", 00:17:52.596 "md_size": 32, 00:17:52.596 "md_interleave": false, 00:17:52.596 "dif_type": 0, 00:17:52.596 "assigned_rate_limits": { 00:17:52.596 "rw_ios_per_sec": 0, 00:17:52.596 "rw_mbytes_per_sec": 0, 00:17:52.596 "r_mbytes_per_sec": 0, 00:17:52.596 "w_mbytes_per_sec": 0 00:17:52.596 }, 00:17:52.596 "claimed": false, 00:17:52.596 "zoned": false, 00:17:52.596 "supported_io_types": { 00:17:52.596 "read": true, 00:17:52.596 "write": true, 00:17:52.596 "unmap": false, 00:17:52.596 "flush": false, 00:17:52.596 "reset": true, 00:17:52.596 "nvme_admin": false, 00:17:52.596 "nvme_io": false, 00:17:52.596 "nvme_io_md": false, 00:17:52.596 "write_zeroes": true, 00:17:52.596 "zcopy": false, 00:17:52.596 "get_zone_info": false, 00:17:52.596 "zone_management": false, 00:17:52.596 "zone_append": false, 00:17:52.596 "compare": 
false, 00:17:52.596 "compare_and_write": false, 00:17:52.596 "abort": false, 00:17:52.596 "seek_hole": false, 00:17:52.596 "seek_data": false, 00:17:52.596 "copy": false, 00:17:52.596 "nvme_iov_md": false 00:17:52.596 }, 00:17:52.596 "memory_domains": [ 00:17:52.596 { 00:17:52.596 "dma_device_id": "system", 00:17:52.596 "dma_device_type": 1 00:17:52.596 }, 00:17:52.596 { 00:17:52.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.596 "dma_device_type": 2 00:17:52.596 }, 00:17:52.596 { 00:17:52.596 "dma_device_id": "system", 00:17:52.596 "dma_device_type": 1 00:17:52.596 }, 00:17:52.596 { 00:17:52.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.596 "dma_device_type": 2 00:17:52.596 } 00:17:52.596 ], 00:17:52.596 "driver_specific": { 00:17:52.596 "raid": { 00:17:52.596 "uuid": "9c531e49-1acf-449b-9ae6-27f1c6296437", 00:17:52.596 "strip_size_kb": 0, 00:17:52.596 "state": "online", 00:17:52.596 "raid_level": "raid1", 00:17:52.596 "superblock": true, 00:17:52.596 "num_base_bdevs": 2, 00:17:52.596 "num_base_bdevs_discovered": 2, 00:17:52.596 "num_base_bdevs_operational": 2, 00:17:52.596 "base_bdevs_list": [ 00:17:52.596 { 00:17:52.596 "name": "pt1", 00:17:52.596 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:52.596 "is_configured": true, 00:17:52.596 "data_offset": 256, 00:17:52.596 "data_size": 7936 00:17:52.596 }, 00:17:52.596 { 00:17:52.596 "name": "pt2", 00:17:52.596 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:52.596 "is_configured": true, 00:17:52.596 "data_offset": 256, 00:17:52.596 "data_size": 7936 00:17:52.596 } 00:17:52.596 ] 00:17:52.596 } 00:17:52.596 } 00:17:52.596 }' 00:17:52.596 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:52.856 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:52.857 pt2' 00:17:52.857 08:54:29 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:52.857 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:52.857 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:52.857 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:52.857 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:52.857 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.857 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.857 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.857 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:52.857 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:52.857 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:52.857 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:52.857 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:52.857 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.857 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.857 08:54:29 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.857 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:52.857 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:52.857 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:52.857 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:52.857 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.857 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.857 [2024-10-05 08:54:29.242113] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:52.857 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.857 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9c531e49-1acf-449b-9ae6-27f1c6296437 00:17:52.857 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 9c531e49-1acf-449b-9ae6-27f1c6296437 ']' 00:17:52.857 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:52.857 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.857 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.857 [2024-10-05 08:54:29.269822] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:52.857 [2024-10-05 08:54:29.269848] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:52.857 
[2024-10-05 08:54:29.269927] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:52.857 [2024-10-05 08:54:29.270003] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:52.857 [2024-10-05 08:54:29.270019] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:52.857 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.857 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.857 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:52.857 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.857 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.857 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.857 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:52.857 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:52.857 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:52.857 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.118 [2024-10-05 08:54:29.413620] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:53.118 [2024-10-05 08:54:29.415663] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:53.118 [2024-10-05 08:54:29.415807] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:53.118 [2024-10-05 08:54:29.415917] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:53.118 [2024-10-05 08:54:29.415994] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:53.118 [2024-10-05 08:54:29.416040] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:53.118 request: 00:17:53.118 { 00:17:53.118 "name": "raid_bdev1", 00:17:53.118 "raid_level": "raid1", 00:17:53.118 "base_bdevs": [ 00:17:53.118 "malloc1", 00:17:53.118 "malloc2" 00:17:53.118 ], 00:17:53.118 "superblock": false, 00:17:53.118 "method": "bdev_raid_create", 00:17:53.118 "req_id": 1 00:17:53.118 } 00:17:53.118 Got JSON-RPC error response 00:17:53.118 response: 00:17:53.118 { 00:17:53.118 "code": -17, 00:17:53.118 "message": "Failed to create RAID bdev raid_bdev1: 
File exists" 00:17:53.118 } 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.118 [2024-10-05 08:54:29.481454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:53.118 [2024-10-05 08:54:29.481575] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.118 [2024-10-05 08:54:29.481612] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:53.118 [2024-10-05 08:54:29.481665] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.118 [2024-10-05 08:54:29.483883] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.118 [2024-10-05 08:54:29.483972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:53.118 [2024-10-05 08:54:29.484063] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:53.118 [2024-10-05 08:54:29.484146] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:53.118 pt1 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.118 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.118 "name": "raid_bdev1", 00:17:53.118 "uuid": "9c531e49-1acf-449b-9ae6-27f1c6296437", 00:17:53.118 "strip_size_kb": 0, 00:17:53.118 "state": "configuring", 00:17:53.118 "raid_level": "raid1", 00:17:53.118 "superblock": true, 00:17:53.118 "num_base_bdevs": 2, 00:17:53.118 "num_base_bdevs_discovered": 1, 00:17:53.118 "num_base_bdevs_operational": 2, 00:17:53.118 "base_bdevs_list": [ 00:17:53.118 { 00:17:53.118 "name": "pt1", 00:17:53.118 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:53.118 "is_configured": true, 00:17:53.118 "data_offset": 256, 00:17:53.118 "data_size": 7936 00:17:53.119 }, 00:17:53.119 { 00:17:53.119 "name": null, 00:17:53.119 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:53.119 "is_configured": false, 00:17:53.119 "data_offset": 256, 00:17:53.119 "data_size": 7936 00:17:53.119 } 00:17:53.119 ] 00:17:53.119 }' 00:17:53.119 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.119 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.692 08:54:29 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:53.692 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:53.692 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:53.692 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:53.692 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.692 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.692 [2024-10-05 08:54:29.964921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:53.692 [2024-10-05 08:54:29.965079] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.692 [2024-10-05 08:54:29.965124] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:53.692 [2024-10-05 08:54:29.965163] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.692 [2024-10-05 08:54:29.965455] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.692 [2024-10-05 08:54:29.965524] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:53.692 [2024-10-05 08:54:29.965615] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:53.692 [2024-10-05 08:54:29.965672] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:53.692 [2024-10-05 08:54:29.965830] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:53.692 [2024-10-05 08:54:29.965877] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:53.692 [2024-10-05 08:54:29.966005] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:53.692 [2024-10-05 08:54:29.966185] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:53.692 [2024-10-05 08:54:29.966228] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:53.692 [2024-10-05 08:54:29.966384] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.692 pt2 00:17:53.692 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.692 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:53.692 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:53.692 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:53.692 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:53.692 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:53.692 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:53.692 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:53.692 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:53.692 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.692 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.692 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.692 08:54:29 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.692 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.692 08:54:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.692 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.692 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.692 08:54:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.692 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.692 "name": "raid_bdev1", 00:17:53.692 "uuid": "9c531e49-1acf-449b-9ae6-27f1c6296437", 00:17:53.692 "strip_size_kb": 0, 00:17:53.692 "state": "online", 00:17:53.692 "raid_level": "raid1", 00:17:53.692 "superblock": true, 00:17:53.692 "num_base_bdevs": 2, 00:17:53.692 "num_base_bdevs_discovered": 2, 00:17:53.692 "num_base_bdevs_operational": 2, 00:17:53.692 "base_bdevs_list": [ 00:17:53.692 { 00:17:53.692 "name": "pt1", 00:17:53.692 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:53.692 "is_configured": true, 00:17:53.692 "data_offset": 256, 00:17:53.692 "data_size": 7936 00:17:53.692 }, 00:17:53.692 { 00:17:53.692 "name": "pt2", 00:17:53.692 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:53.692 "is_configured": true, 00:17:53.692 "data_offset": 256, 00:17:53.692 "data_size": 7936 00:17:53.692 } 00:17:53.692 ] 00:17:53.692 }' 00:17:53.692 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.692 08:54:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.950 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # 
verify_raid_bdev_properties raid_bdev1 00:17:53.950 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:53.950 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:53.950 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:53.950 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:53.950 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:53.950 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:53.950 08:54:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.950 08:54:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.950 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:53.950 [2024-10-05 08:54:30.352477] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:53.950 08:54:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.950 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:53.950 "name": "raid_bdev1", 00:17:53.950 "aliases": [ 00:17:53.950 "9c531e49-1acf-449b-9ae6-27f1c6296437" 00:17:53.950 ], 00:17:53.950 "product_name": "Raid Volume", 00:17:53.950 "block_size": 4096, 00:17:53.950 "num_blocks": 7936, 00:17:53.950 "uuid": "9c531e49-1acf-449b-9ae6-27f1c6296437", 00:17:53.950 "md_size": 32, 00:17:53.950 "md_interleave": false, 00:17:53.950 "dif_type": 0, 00:17:53.950 "assigned_rate_limits": { 00:17:53.950 "rw_ios_per_sec": 0, 00:17:53.950 "rw_mbytes_per_sec": 0, 00:17:53.950 "r_mbytes_per_sec": 0, 00:17:53.950 
"w_mbytes_per_sec": 0 00:17:53.950 }, 00:17:53.950 "claimed": false, 00:17:53.950 "zoned": false, 00:17:53.950 "supported_io_types": { 00:17:53.950 "read": true, 00:17:53.950 "write": true, 00:17:53.950 "unmap": false, 00:17:53.950 "flush": false, 00:17:53.950 "reset": true, 00:17:53.950 "nvme_admin": false, 00:17:53.950 "nvme_io": false, 00:17:53.950 "nvme_io_md": false, 00:17:53.950 "write_zeroes": true, 00:17:53.950 "zcopy": false, 00:17:53.950 "get_zone_info": false, 00:17:53.950 "zone_management": false, 00:17:53.950 "zone_append": false, 00:17:53.950 "compare": false, 00:17:53.950 "compare_and_write": false, 00:17:53.950 "abort": false, 00:17:53.950 "seek_hole": false, 00:17:53.950 "seek_data": false, 00:17:53.950 "copy": false, 00:17:53.950 "nvme_iov_md": false 00:17:53.950 }, 00:17:53.950 "memory_domains": [ 00:17:53.950 { 00:17:53.950 "dma_device_id": "system", 00:17:53.950 "dma_device_type": 1 00:17:53.950 }, 00:17:53.950 { 00:17:53.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.950 "dma_device_type": 2 00:17:53.950 }, 00:17:53.950 { 00:17:53.950 "dma_device_id": "system", 00:17:53.950 "dma_device_type": 1 00:17:53.950 }, 00:17:53.950 { 00:17:53.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.950 "dma_device_type": 2 00:17:53.950 } 00:17:53.950 ], 00:17:53.950 "driver_specific": { 00:17:53.950 "raid": { 00:17:53.950 "uuid": "9c531e49-1acf-449b-9ae6-27f1c6296437", 00:17:53.950 "strip_size_kb": 0, 00:17:53.950 "state": "online", 00:17:53.950 "raid_level": "raid1", 00:17:53.950 "superblock": true, 00:17:53.950 "num_base_bdevs": 2, 00:17:53.950 "num_base_bdevs_discovered": 2, 00:17:53.950 "num_base_bdevs_operational": 2, 00:17:53.950 "base_bdevs_list": [ 00:17:53.950 { 00:17:53.950 "name": "pt1", 00:17:53.950 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:53.950 "is_configured": true, 00:17:53.950 "data_offset": 256, 00:17:53.950 "data_size": 7936 00:17:53.950 }, 00:17:53.950 { 00:17:53.950 "name": "pt2", 00:17:53.950 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:17:53.950 "is_configured": true, 00:17:53.950 "data_offset": 256, 00:17:53.950 "data_size": 7936 00:17:53.950 } 00:17:53.950 ] 00:17:53.950 } 00:17:53.950 } 00:17:53.950 }' 00:17:53.950 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:54.210 pt2' 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.210 [2024-10-05 08:54:30.588126] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 9c531e49-1acf-449b-9ae6-27f1c6296437 '!=' 9c531e49-1acf-449b-9ae6-27f1c6296437 ']' 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@198 -- # case $1 in 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.210 [2024-10-05 08:54:30.619891] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.210 08:54:30 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.210 "name": "raid_bdev1", 00:17:54.210 "uuid": "9c531e49-1acf-449b-9ae6-27f1c6296437", 00:17:54.210 "strip_size_kb": 0, 00:17:54.210 "state": "online", 00:17:54.210 "raid_level": "raid1", 00:17:54.210 "superblock": true, 00:17:54.210 "num_base_bdevs": 2, 00:17:54.210 "num_base_bdevs_discovered": 1, 00:17:54.210 "num_base_bdevs_operational": 1, 00:17:54.210 "base_bdevs_list": [ 00:17:54.210 { 00:17:54.210 "name": null, 00:17:54.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.210 "is_configured": false, 00:17:54.210 "data_offset": 0, 00:17:54.210 "data_size": 7936 00:17:54.210 }, 00:17:54.210 { 00:17:54.210 "name": "pt2", 00:17:54.210 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:54.210 "is_configured": true, 00:17:54.210 "data_offset": 256, 00:17:54.210 "data_size": 7936 00:17:54.210 } 00:17:54.210 ] 00:17:54.210 }' 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.210 08:54:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.785 [2024-10-05 08:54:31.067070] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:54.785 [2024-10-05 08:54:31.067150] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:54.785 [2024-10-05 08:54:31.067241] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:54.785 [2024-10-05 08:54:31.067324] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:54.785 [2024-10-05 08:54:31.067376] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:54.785 08:54:31 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.785 [2024-10-05 08:54:31.138935] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:54.785 [2024-10-05 08:54:31.139013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:54.785 [2024-10-05 08:54:31.139032] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:54.785 [2024-10-05 08:54:31.139046] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:54.785 [2024-10-05 08:54:31.141133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:17:54.785 [2024-10-05 08:54:31.141178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:54.785 [2024-10-05 08:54:31.141256] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:54.785 [2024-10-05 08:54:31.141316] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:54.785 [2024-10-05 08:54:31.141419] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:54.785 [2024-10-05 08:54:31.141433] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:54.785 [2024-10-05 08:54:31.141517] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:54.785 [2024-10-05 08:54:31.141635] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:54.785 [2024-10-05 08:54:31.141644] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:54.785 [2024-10-05 08:54:31.141750] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:54.785 pt2 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.785 "name": "raid_bdev1", 00:17:54.785 "uuid": "9c531e49-1acf-449b-9ae6-27f1c6296437", 00:17:54.785 "strip_size_kb": 0, 00:17:54.785 "state": "online", 00:17:54.785 "raid_level": "raid1", 00:17:54.785 "superblock": true, 00:17:54.785 "num_base_bdevs": 2, 00:17:54.785 "num_base_bdevs_discovered": 1, 00:17:54.785 "num_base_bdevs_operational": 1, 00:17:54.785 "base_bdevs_list": [ 00:17:54.785 { 00:17:54.785 "name": null, 00:17:54.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.785 "is_configured": false, 00:17:54.785 "data_offset": 256, 00:17:54.785 "data_size": 7936 00:17:54.785 }, 00:17:54.785 { 00:17:54.785 "name": "pt2", 00:17:54.785 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:54.785 "is_configured": true, 
00:17:54.785 "data_offset": 256, 00:17:54.785 "data_size": 7936 00:17:54.785 } 00:17:54.785 ] 00:17:54.785 }' 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.785 08:54:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.396 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:55.396 08:54:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.396 08:54:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.396 [2024-10-05 08:54:31.618058] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:55.396 [2024-10-05 08:54:31.618151] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:55.396 [2024-10-05 08:54:31.618247] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:55.396 [2024-10-05 08:54:31.618315] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:55.396 [2024-10-05 08:54:31.618370] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:55.396 08:54:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.396 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.396 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:55.396 08:54:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.396 08:54:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.396 08:54:31 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.396 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:55.396 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:55.396 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:55.396 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:55.396 08:54:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.396 08:54:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.396 [2024-10-05 08:54:31.662074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:55.396 [2024-10-05 08:54:31.662170] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:55.396 [2024-10-05 08:54:31.662225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:55.396 [2024-10-05 08:54:31.662287] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:55.396 [2024-10-05 08:54:31.664530] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:55.396 [2024-10-05 08:54:31.664619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:55.396 [2024-10-05 08:54:31.664717] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:55.396 [2024-10-05 08:54:31.664786] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:55.396 [2024-10-05 08:54:31.664996] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:55.396 
[2024-10-05 08:54:31.665059] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:55.396 [2024-10-05 08:54:31.665107] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:55.396 [2024-10-05 08:54:31.665236] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:55.396 [2024-10-05 08:54:31.665353] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:55.396 [2024-10-05 08:54:31.665395] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:55.396 [2024-10-05 08:54:31.665493] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:55.396 [2024-10-05 08:54:31.665649] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:55.396 [2024-10-05 08:54:31.665693] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:55.396 [2024-10-05 08:54:31.665850] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:55.396 pt1 00:17:55.396 08:54:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.396 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:55.396 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:55.396 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:55.396 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:55.396 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:55.396 08:54:31 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:55.396 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:55.396 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.396 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.396 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.396 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.396 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.396 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.396 08:54:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.396 08:54:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.396 08:54:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.396 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.396 "name": "raid_bdev1", 00:17:55.396 "uuid": "9c531e49-1acf-449b-9ae6-27f1c6296437", 00:17:55.396 "strip_size_kb": 0, 00:17:55.396 "state": "online", 00:17:55.396 "raid_level": "raid1", 00:17:55.396 "superblock": true, 00:17:55.396 "num_base_bdevs": 2, 00:17:55.396 "num_base_bdevs_discovered": 1, 00:17:55.396 "num_base_bdevs_operational": 1, 00:17:55.396 "base_bdevs_list": [ 00:17:55.396 { 00:17:55.396 "name": null, 00:17:55.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.397 "is_configured": false, 00:17:55.397 "data_offset": 256, 00:17:55.397 "data_size": 7936 00:17:55.397 }, 00:17:55.397 { 00:17:55.397 
"name": "pt2", 00:17:55.397 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:55.397 "is_configured": true, 00:17:55.397 "data_offset": 256, 00:17:55.397 "data_size": 7936 00:17:55.397 } 00:17:55.397 ] 00:17:55.397 }' 00:17:55.397 08:54:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.397 08:54:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.657 08:54:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:55.657 08:54:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:55.657 08:54:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.657 08:54:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.657 08:54:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.916 08:54:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:55.916 08:54:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:55.916 08:54:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.917 08:54:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.917 08:54:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:55.917 [2024-10-05 08:54:32.169524] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:55.917 08:54:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.917 08:54:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 
9c531e49-1acf-449b-9ae6-27f1c6296437 '!=' 9c531e49-1acf-449b-9ae6-27f1c6296437 ']' 00:17:55.917 08:54:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 83626 00:17:55.917 08:54:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 83626 ']' 00:17:55.917 08:54:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 83626 00:17:55.917 08:54:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:17:55.917 08:54:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:55.917 08:54:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83626 00:17:55.917 08:54:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:55.917 08:54:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:55.917 killing process with pid 83626 00:17:55.917 08:54:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83626' 00:17:55.917 08:54:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 83626 00:17:55.917 [2024-10-05 08:54:32.249684] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:55.917 [2024-10-05 08:54:32.249765] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:55.917 [2024-10-05 08:54:32.249809] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:55.917 [2024-10-05 08:54:32.249825] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:55.917 08:54:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 83626 
00:17:56.176 [2024-10-05 08:54:32.477191] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:57.557 08:54:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:17:57.557 00:17:57.557 real 0m6.314s 00:17:57.557 user 0m9.210s 00:17:57.557 sys 0m1.268s 00:17:57.557 08:54:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:57.557 ************************************ 00:17:57.557 END TEST raid_superblock_test_md_separate 00:17:57.557 ************************************ 00:17:57.557 08:54:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.557 08:54:33 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:17:57.557 08:54:33 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:17:57.557 08:54:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:57.557 08:54:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:57.557 08:54:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:57.557 ************************************ 00:17:57.557 START TEST raid_rebuild_test_sb_md_separate 00:17:57.557 ************************************ 00:17:57.557 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:17:57.557 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:57.557 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:57.557 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:57.557 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:57.557 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:57.557 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:57.557 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:57.557 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:57.557 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:57.557 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:57.557 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:57.557 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:57.557 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:57.557 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:57.557 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:57.557 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:57.557 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:57.557 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:57.557 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:57.557 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:57.557 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:57.557 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 
00:17:57.557 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:57.557 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:57.557 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=83920 00:17:57.557 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:57.557 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 83920 00:17:57.557 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 83920 ']' 00:17:57.557 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.557 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:57.557 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.557 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:57.557 08:54:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.557 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:57.557 Zero copy mechanism will not be used. 00:17:57.557 [2024-10-05 08:54:33.972552] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:17:57.557 [2024-10-05 08:54:33.972659] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83920 ] 00:17:57.818 [2024-10-05 08:54:34.135543] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.078 [2024-10-05 08:54:34.381533] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.338 [2024-10-05 08:54:34.612304] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:58.338 [2024-10-05 08:54:34.612349] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:58.338 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:58.338 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:17:58.338 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:58.338 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:17:58.338 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.338 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.598 BaseBdev1_malloc 00:17:58.598 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.598 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:58.598 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.598 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:17:58.598 [2024-10-05 08:54:34.852026] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:58.598 [2024-10-05 08:54:34.852122] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.598 [2024-10-05 08:54:34.852152] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:58.598 [2024-10-05 08:54:34.852168] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.598 [2024-10-05 08:54:34.854385] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.598 [2024-10-05 08:54:34.854508] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:58.598 BaseBdev1 00:17:58.598 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.598 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:58.598 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:17:58.598 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.598 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.598 BaseBdev2_malloc 00:17:58.598 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.598 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:58.598 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.598 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.598 [2024-10-05 08:54:34.946327] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:58.598 [2024-10-05 08:54:34.946417] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.598 [2024-10-05 08:54:34.946441] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:58.598 [2024-10-05 08:54:34.946456] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.598 [2024-10-05 08:54:34.948671] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.598 [2024-10-05 08:54:34.948809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:58.598 BaseBdev2 00:17:58.598 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.599 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:17:58.599 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.599 08:54:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.599 spare_malloc 00:17:58.599 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.599 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:58.599 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.599 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.599 spare_delay 00:17:58.599 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.599 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- 
# rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:58.599 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.599 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.599 [2024-10-05 08:54:35.023549] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:58.599 [2024-10-05 08:54:35.023637] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.599 [2024-10-05 08:54:35.023660] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:58.599 [2024-10-05 08:54:35.023674] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.599 [2024-10-05 08:54:35.025892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.599 [2024-10-05 08:54:35.025939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:58.599 spare 00:17:58.599 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.599 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:58.599 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.599 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.599 [2024-10-05 08:54:35.035585] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:58.599 [2024-10-05 08:54:35.037713] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:58.599 [2024-10-05 08:54:35.037937] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:58.599 [2024-10-05 08:54:35.037965] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:58.599 [2024-10-05 08:54:35.038041] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:58.599 [2024-10-05 08:54:35.038178] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:58.599 [2024-10-05 08:54:35.038187] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:58.599 [2024-10-05 08:54:35.038304] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.599 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.599 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:58.599 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.599 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:58.599 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:58.599 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:58.599 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:58.599 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.599 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.599 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.599 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.599 08:54:35 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.599 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.599 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.599 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.599 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.859 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.859 "name": "raid_bdev1", 00:17:58.859 "uuid": "85c45d62-cca3-42a4-b531-ce44c58173fb", 00:17:58.859 "strip_size_kb": 0, 00:17:58.859 "state": "online", 00:17:58.859 "raid_level": "raid1", 00:17:58.859 "superblock": true, 00:17:58.859 "num_base_bdevs": 2, 00:17:58.859 "num_base_bdevs_discovered": 2, 00:17:58.859 "num_base_bdevs_operational": 2, 00:17:58.859 "base_bdevs_list": [ 00:17:58.859 { 00:17:58.859 "name": "BaseBdev1", 00:17:58.859 "uuid": "75e14771-8bc5-55b1-83fa-d9e2a01262d2", 00:17:58.859 "is_configured": true, 00:17:58.859 "data_offset": 256, 00:17:58.859 "data_size": 7936 00:17:58.859 }, 00:17:58.859 { 00:17:58.859 "name": "BaseBdev2", 00:17:58.859 "uuid": "bcf96514-5bd3-5e96-add8-d5d5c9bdcea8", 00:17:58.859 "is_configured": true, 00:17:58.859 "data_offset": 256, 00:17:58.859 "data_size": 7936 00:17:58.859 } 00:17:58.859 ] 00:17:58.859 }' 00:17:58.859 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.859 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.119 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:59.119 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 
-- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:59.119 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.119 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.119 [2024-10-05 08:54:35.487103] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:59.119 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.119 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:59.119 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.119 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:59.119 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.119 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.119 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.119 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:59.119 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:59.119 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:59.119 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:59.119 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:59.119 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:59.119 08:54:35 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:59.119 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:59.119 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:59.119 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:59.119 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:59.119 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:59.119 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:59.119 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:59.379 [2024-10-05 08:54:35.750397] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:59.379 /dev/nbd0 00:17:59.379 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:59.379 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:59.379 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:59.379 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:17:59.379 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:59.379 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:59.379 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:59.379 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # break 00:17:59.379 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:59.379 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:59.379 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:59.379 1+0 records in 00:17:59.379 1+0 records out 00:17:59.379 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459975 s, 8.9 MB/s 00:17:59.379 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:59.379 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:17:59.379 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:59.379 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:59.379 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:17:59.379 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:59.379 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:59.379 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:59.379 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:59.379 08:54:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:18:00.319 7936+0 records in 00:18:00.319 7936+0 records out 00:18:00.319 32505856 bytes (33 MB, 31 MiB) copied, 0.640228 s, 
50.8 MB/s 00:18:00.319 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:00.319 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:00.319 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:00.319 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:00.319 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:00.319 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:00.319 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:00.319 [2024-10-05 08:54:36.689545] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:00.319 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:00.319 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:00.319 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:00.319 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:00.319 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:00.319 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:00.319 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:00.319 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:00.319 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:00.319 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.319 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.319 [2024-10-05 08:54:36.717479] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:00.319 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.319 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:00.319 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:00.319 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:00.319 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:00.319 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:00.319 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:00.319 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.319 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.319 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.319 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.319 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.319 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:18:00.319 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.319 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.319 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.319 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.319 "name": "raid_bdev1", 00:18:00.319 "uuid": "85c45d62-cca3-42a4-b531-ce44c58173fb", 00:18:00.319 "strip_size_kb": 0, 00:18:00.319 "state": "online", 00:18:00.319 "raid_level": "raid1", 00:18:00.319 "superblock": true, 00:18:00.319 "num_base_bdevs": 2, 00:18:00.319 "num_base_bdevs_discovered": 1, 00:18:00.319 "num_base_bdevs_operational": 1, 00:18:00.319 "base_bdevs_list": [ 00:18:00.319 { 00:18:00.319 "name": null, 00:18:00.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.319 "is_configured": false, 00:18:00.319 "data_offset": 0, 00:18:00.319 "data_size": 7936 00:18:00.319 }, 00:18:00.319 { 00:18:00.319 "name": "BaseBdev2", 00:18:00.319 "uuid": "bcf96514-5bd3-5e96-add8-d5d5c9bdcea8", 00:18:00.319 "is_configured": true, 00:18:00.319 "data_offset": 256, 00:18:00.319 "data_size": 7936 00:18:00.319 } 00:18:00.319 ] 00:18:00.319 }' 00:18:00.319 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.319 08:54:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.889 08:54:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:00.889 08:54:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.889 08:54:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.889 [2024-10-05 08:54:37.157075] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:00.889 [2024-10-05 08:54:37.172465] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:18:00.889 08:54:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.889 08:54:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:00.889 [2024-10-05 08:54:37.174563] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:01.827 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:01.827 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:01.827 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:01.827 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:01.827 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:01.827 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.827 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.827 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.827 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.827 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.827 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:01.827 "name": "raid_bdev1", 00:18:01.827 "uuid": 
"85c45d62-cca3-42a4-b531-ce44c58173fb", 00:18:01.827 "strip_size_kb": 0, 00:18:01.827 "state": "online", 00:18:01.827 "raid_level": "raid1", 00:18:01.827 "superblock": true, 00:18:01.827 "num_base_bdevs": 2, 00:18:01.827 "num_base_bdevs_discovered": 2, 00:18:01.827 "num_base_bdevs_operational": 2, 00:18:01.827 "process": { 00:18:01.827 "type": "rebuild", 00:18:01.827 "target": "spare", 00:18:01.827 "progress": { 00:18:01.827 "blocks": 2560, 00:18:01.827 "percent": 32 00:18:01.827 } 00:18:01.827 }, 00:18:01.827 "base_bdevs_list": [ 00:18:01.827 { 00:18:01.827 "name": "spare", 00:18:01.827 "uuid": "5e6b7d1d-e11d-5b3d-9c88-9c3fb1ee13e5", 00:18:01.827 "is_configured": true, 00:18:01.827 "data_offset": 256, 00:18:01.827 "data_size": 7936 00:18:01.827 }, 00:18:01.827 { 00:18:01.827 "name": "BaseBdev2", 00:18:01.827 "uuid": "bcf96514-5bd3-5e96-add8-d5d5c9bdcea8", 00:18:01.827 "is_configured": true, 00:18:01.827 "data_offset": 256, 00:18:01.827 "data_size": 7936 00:18:01.827 } 00:18:01.827 ] 00:18:01.827 }' 00:18:01.827 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:01.827 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:01.827 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:02.086 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:02.086 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:02.086 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.086 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.086 [2024-10-05 08:54:38.330646] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:02.086 
[2024-10-05 08:54:38.383418] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:02.086 [2024-10-05 08:54:38.383490] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.086 [2024-10-05 08:54:38.383506] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:02.086 [2024-10-05 08:54:38.383523] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:02.086 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.086 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:02.086 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.086 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.086 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.086 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.086 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:02.086 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.086 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.086 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.086 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.086 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.086 08:54:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.086 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.086 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.086 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.086 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.086 "name": "raid_bdev1", 00:18:02.086 "uuid": "85c45d62-cca3-42a4-b531-ce44c58173fb", 00:18:02.086 "strip_size_kb": 0, 00:18:02.086 "state": "online", 00:18:02.086 "raid_level": "raid1", 00:18:02.086 "superblock": true, 00:18:02.086 "num_base_bdevs": 2, 00:18:02.086 "num_base_bdevs_discovered": 1, 00:18:02.086 "num_base_bdevs_operational": 1, 00:18:02.086 "base_bdevs_list": [ 00:18:02.086 { 00:18:02.086 "name": null, 00:18:02.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.086 "is_configured": false, 00:18:02.086 "data_offset": 0, 00:18:02.086 "data_size": 7936 00:18:02.086 }, 00:18:02.086 { 00:18:02.086 "name": "BaseBdev2", 00:18:02.086 "uuid": "bcf96514-5bd3-5e96-add8-d5d5c9bdcea8", 00:18:02.086 "is_configured": true, 00:18:02.086 "data_offset": 256, 00:18:02.086 "data_size": 7936 00:18:02.086 } 00:18:02.086 ] 00:18:02.086 }' 00:18:02.086 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.086 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.656 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:02.656 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:02.656 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:02.656 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:02.656 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:02.656 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.656 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.656 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.656 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.656 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.656 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:02.656 "name": "raid_bdev1", 00:18:02.656 "uuid": "85c45d62-cca3-42a4-b531-ce44c58173fb", 00:18:02.656 "strip_size_kb": 0, 00:18:02.656 "state": "online", 00:18:02.656 "raid_level": "raid1", 00:18:02.656 "superblock": true, 00:18:02.656 "num_base_bdevs": 2, 00:18:02.656 "num_base_bdevs_discovered": 1, 00:18:02.656 "num_base_bdevs_operational": 1, 00:18:02.656 "base_bdevs_list": [ 00:18:02.656 { 00:18:02.656 "name": null, 00:18:02.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.656 "is_configured": false, 00:18:02.656 "data_offset": 0, 00:18:02.656 "data_size": 7936 00:18:02.656 }, 00:18:02.656 { 00:18:02.656 "name": "BaseBdev2", 00:18:02.656 "uuid": "bcf96514-5bd3-5e96-add8-d5d5c9bdcea8", 00:18:02.656 "is_configured": true, 00:18:02.656 "data_offset": 256, 00:18:02.656 "data_size": 7936 00:18:02.656 } 00:18:02.656 ] 00:18:02.656 }' 00:18:02.656 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:18:02.656 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:02.656 08:54:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:02.656 08:54:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:02.656 08:54:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:02.656 08:54:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.656 08:54:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.656 [2024-10-05 08:54:39.050829] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:02.656 [2024-10-05 08:54:39.064264] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:18:02.656 08:54:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.656 08:54:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:02.656 [2024-10-05 08:54:39.066363] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:04.039 08:54:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:04.039 08:54:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:04.039 08:54:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:04.039 08:54:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:04.039 08:54:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:04.039 08:54:40 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.039 08:54:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.039 08:54:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.039 08:54:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.039 08:54:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.039 08:54:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:04.039 "name": "raid_bdev1", 00:18:04.039 "uuid": "85c45d62-cca3-42a4-b531-ce44c58173fb", 00:18:04.039 "strip_size_kb": 0, 00:18:04.039 "state": "online", 00:18:04.039 "raid_level": "raid1", 00:18:04.039 "superblock": true, 00:18:04.039 "num_base_bdevs": 2, 00:18:04.039 "num_base_bdevs_discovered": 2, 00:18:04.039 "num_base_bdevs_operational": 2, 00:18:04.039 "process": { 00:18:04.039 "type": "rebuild", 00:18:04.039 "target": "spare", 00:18:04.039 "progress": { 00:18:04.039 "blocks": 2560, 00:18:04.039 "percent": 32 00:18:04.039 } 00:18:04.039 }, 00:18:04.039 "base_bdevs_list": [ 00:18:04.039 { 00:18:04.039 "name": "spare", 00:18:04.039 "uuid": "5e6b7d1d-e11d-5b3d-9c88-9c3fb1ee13e5", 00:18:04.039 "is_configured": true, 00:18:04.039 "data_offset": 256, 00:18:04.039 "data_size": 7936 00:18:04.039 }, 00:18:04.039 { 00:18:04.039 "name": "BaseBdev2", 00:18:04.039 "uuid": "bcf96514-5bd3-5e96-add8-d5d5c9bdcea8", 00:18:04.039 "is_configured": true, 00:18:04.039 "data_offset": 256, 00:18:04.039 "data_size": 7936 00:18:04.039 } 00:18:04.039 ] 00:18:04.039 }' 00:18:04.039 08:54:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:04.039 08:54:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:18:04.039 08:54:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:04.039 08:54:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:04.039 08:54:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:04.039 08:54:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:04.039 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:04.039 08:54:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:04.040 08:54:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:04.040 08:54:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:04.040 08:54:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=712 00:18:04.040 08:54:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:04.040 08:54:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:04.040 08:54:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:04.040 08:54:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:04.040 08:54:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:04.040 08:54:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:04.040 08:54:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.040 08:54:40 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.040 08:54:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.040 08:54:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.040 08:54:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.040 08:54:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:04.040 "name": "raid_bdev1", 00:18:04.040 "uuid": "85c45d62-cca3-42a4-b531-ce44c58173fb", 00:18:04.040 "strip_size_kb": 0, 00:18:04.040 "state": "online", 00:18:04.040 "raid_level": "raid1", 00:18:04.040 "superblock": true, 00:18:04.040 "num_base_bdevs": 2, 00:18:04.040 "num_base_bdevs_discovered": 2, 00:18:04.040 "num_base_bdevs_operational": 2, 00:18:04.040 "process": { 00:18:04.040 "type": "rebuild", 00:18:04.040 "target": "spare", 00:18:04.040 "progress": { 00:18:04.040 "blocks": 2816, 00:18:04.040 "percent": 35 00:18:04.040 } 00:18:04.040 }, 00:18:04.040 "base_bdevs_list": [ 00:18:04.040 { 00:18:04.040 "name": "spare", 00:18:04.040 "uuid": "5e6b7d1d-e11d-5b3d-9c88-9c3fb1ee13e5", 00:18:04.040 "is_configured": true, 00:18:04.040 "data_offset": 256, 00:18:04.040 "data_size": 7936 00:18:04.040 }, 00:18:04.040 { 00:18:04.040 "name": "BaseBdev2", 00:18:04.040 "uuid": "bcf96514-5bd3-5e96-add8-d5d5c9bdcea8", 00:18:04.040 "is_configured": true, 00:18:04.040 "data_offset": 256, 00:18:04.040 "data_size": 7936 00:18:04.040 } 00:18:04.040 ] 00:18:04.040 }' 00:18:04.040 08:54:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:04.040 08:54:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:04.040 08:54:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:18:04.040 08:54:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:04.040 08:54:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:04.978 08:54:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:04.978 08:54:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:04.978 08:54:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:04.978 08:54:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:04.979 08:54:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:04.979 08:54:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:04.979 08:54:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.979 08:54:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.979 08:54:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.979 08:54:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.979 08:54:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.979 08:54:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:04.979 "name": "raid_bdev1", 00:18:04.979 "uuid": "85c45d62-cca3-42a4-b531-ce44c58173fb", 00:18:04.979 "strip_size_kb": 0, 00:18:04.979 "state": "online", 00:18:04.979 "raid_level": "raid1", 00:18:04.979 "superblock": true, 00:18:04.979 "num_base_bdevs": 2, 00:18:04.979 
"num_base_bdevs_discovered": 2, 00:18:04.979 "num_base_bdevs_operational": 2, 00:18:04.979 "process": { 00:18:04.979 "type": "rebuild", 00:18:04.979 "target": "spare", 00:18:04.979 "progress": { 00:18:04.979 "blocks": 5888, 00:18:04.979 "percent": 74 00:18:04.979 } 00:18:04.979 }, 00:18:04.979 "base_bdevs_list": [ 00:18:04.979 { 00:18:04.979 "name": "spare", 00:18:04.979 "uuid": "5e6b7d1d-e11d-5b3d-9c88-9c3fb1ee13e5", 00:18:04.979 "is_configured": true, 00:18:04.979 "data_offset": 256, 00:18:04.979 "data_size": 7936 00:18:04.979 }, 00:18:04.979 { 00:18:04.979 "name": "BaseBdev2", 00:18:04.979 "uuid": "bcf96514-5bd3-5e96-add8-d5d5c9bdcea8", 00:18:04.979 "is_configured": true, 00:18:04.979 "data_offset": 256, 00:18:04.979 "data_size": 7936 00:18:04.979 } 00:18:04.979 ] 00:18:04.979 }' 00:18:04.979 08:54:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:05.239 08:54:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:05.239 08:54:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:05.239 08:54:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:05.239 08:54:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:05.808 [2024-10-05 08:54:42.188562] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:05.808 [2024-10-05 08:54:42.188640] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:05.808 [2024-10-05 08:54:42.188758] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:06.068 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:06.068 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:06.068 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:06.068 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:06.068 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:06.068 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:06.068 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.068 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.068 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.068 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.328 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.328 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:06.328 "name": "raid_bdev1", 00:18:06.328 "uuid": "85c45d62-cca3-42a4-b531-ce44c58173fb", 00:18:06.328 "strip_size_kb": 0, 00:18:06.328 "state": "online", 00:18:06.328 "raid_level": "raid1", 00:18:06.328 "superblock": true, 00:18:06.328 "num_base_bdevs": 2, 00:18:06.328 "num_base_bdevs_discovered": 2, 00:18:06.328 "num_base_bdevs_operational": 2, 00:18:06.328 "base_bdevs_list": [ 00:18:06.328 { 00:18:06.328 "name": "spare", 00:18:06.328 "uuid": "5e6b7d1d-e11d-5b3d-9c88-9c3fb1ee13e5", 00:18:06.328 "is_configured": true, 00:18:06.328 "data_offset": 256, 00:18:06.328 "data_size": 7936 00:18:06.328 }, 00:18:06.328 { 00:18:06.328 "name": "BaseBdev2", 00:18:06.328 "uuid": "bcf96514-5bd3-5e96-add8-d5d5c9bdcea8", 00:18:06.328 
"is_configured": true, 00:18:06.328 "data_offset": 256, 00:18:06.328 "data_size": 7936 00:18:06.328 } 00:18:06.328 ] 00:18:06.328 }' 00:18:06.328 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:06.328 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:06.328 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:06.328 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:06.328 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:18:06.328 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:06.328 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:06.328 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:06.328 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:06.328 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:06.328 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.328 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.328 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.328 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.328 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.328 08:54:42 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:06.328 "name": "raid_bdev1", 00:18:06.328 "uuid": "85c45d62-cca3-42a4-b531-ce44c58173fb", 00:18:06.328 "strip_size_kb": 0, 00:18:06.328 "state": "online", 00:18:06.328 "raid_level": "raid1", 00:18:06.328 "superblock": true, 00:18:06.328 "num_base_bdevs": 2, 00:18:06.328 "num_base_bdevs_discovered": 2, 00:18:06.328 "num_base_bdevs_operational": 2, 00:18:06.328 "base_bdevs_list": [ 00:18:06.328 { 00:18:06.328 "name": "spare", 00:18:06.328 "uuid": "5e6b7d1d-e11d-5b3d-9c88-9c3fb1ee13e5", 00:18:06.328 "is_configured": true, 00:18:06.328 "data_offset": 256, 00:18:06.328 "data_size": 7936 00:18:06.328 }, 00:18:06.328 { 00:18:06.328 "name": "BaseBdev2", 00:18:06.328 "uuid": "bcf96514-5bd3-5e96-add8-d5d5c9bdcea8", 00:18:06.328 "is_configured": true, 00:18:06.328 "data_offset": 256, 00:18:06.328 "data_size": 7936 00:18:06.328 } 00:18:06.328 ] 00:18:06.328 }' 00:18:06.328 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:06.328 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:06.328 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:06.328 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:06.328 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:06.328 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.328 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.328 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.328 08:54:42 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.328 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:06.328 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.328 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.328 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.328 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.328 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.328 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.328 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.328 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.588 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.588 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.588 "name": "raid_bdev1", 00:18:06.588 "uuid": "85c45d62-cca3-42a4-b531-ce44c58173fb", 00:18:06.588 "strip_size_kb": 0, 00:18:06.588 "state": "online", 00:18:06.588 "raid_level": "raid1", 00:18:06.588 "superblock": true, 00:18:06.588 "num_base_bdevs": 2, 00:18:06.588 "num_base_bdevs_discovered": 2, 00:18:06.588 "num_base_bdevs_operational": 2, 00:18:06.588 "base_bdevs_list": [ 00:18:06.588 { 00:18:06.588 "name": "spare", 00:18:06.588 "uuid": "5e6b7d1d-e11d-5b3d-9c88-9c3fb1ee13e5", 00:18:06.588 "is_configured": true, 00:18:06.588 "data_offset": 256, 00:18:06.588 "data_size": 
7936 00:18:06.588 }, 00:18:06.588 { 00:18:06.588 "name": "BaseBdev2", 00:18:06.588 "uuid": "bcf96514-5bd3-5e96-add8-d5d5c9bdcea8", 00:18:06.588 "is_configured": true, 00:18:06.588 "data_offset": 256, 00:18:06.588 "data_size": 7936 00:18:06.588 } 00:18:06.588 ] 00:18:06.588 }' 00:18:06.588 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.588 08:54:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.848 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:06.848 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.848 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.848 [2024-10-05 08:54:43.243128] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:06.848 [2024-10-05 08:54:43.243223] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:06.848 [2024-10-05 08:54:43.243347] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:06.848 [2024-10-05 08:54:43.243443] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:06.848 [2024-10-05 08:54:43.243516] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:06.848 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.848 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.848 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:18:06.848 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:06.848 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.848 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.848 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:06.848 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:06.848 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:06.848 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:06.848 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:06.848 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:06.848 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:06.848 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:06.848 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:06.848 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:06.848 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:06.848 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:06.848 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:07.108 /dev/nbd0 00:18:07.108 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 
00:18:07.108 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:07.108 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:07.108 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:18:07.108 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:07.108 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:07.108 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:07.108 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:18:07.108 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:07.108 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:07.108 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:07.108 1+0 records in 00:18:07.108 1+0 records out 00:18:07.108 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302847 s, 13.5 MB/s 00:18:07.108 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:07.108 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:18:07.108 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:07.108 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:07.108 08:54:43 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:18:07.108 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:07.108 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:07.108 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:07.369 /dev/nbd1 00:18:07.369 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:07.369 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:07.369 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:18:07.369 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:18:07.369 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:07.369 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:07.369 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:18:07.369 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:18:07.369 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:07.369 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:07.369 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:07.369 1+0 records in 00:18:07.369 1+0 records out 00:18:07.369 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000378606 s, 10.8 MB/s 00:18:07.369 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:07.369 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:18:07.369 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:07.369 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:07.369 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:18:07.369 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:07.369 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:07.369 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:07.629 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:07.629 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:07.629 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:07.629 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:07.629 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:07.629 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:07.629 08:54:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:07.889 08:54:44 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:07.889 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:07.889 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:07.889 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:07.889 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:07.889 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:07.889 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:07.889 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:07.889 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:07.889 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:08.149 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:08.149 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:08.149 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:08.149 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:08.149 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:08.149 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:08.149 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:08.149 08:54:44 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:08.149 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:08.149 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:08.149 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.149 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.149 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.149 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:08.149 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.149 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.149 [2024-10-05 08:54:44.413008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:08.149 [2024-10-05 08:54:44.413060] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.149 [2024-10-05 08:54:44.413080] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:08.149 [2024-10-05 08:54:44.413089] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.149 [2024-10-05 08:54:44.414948] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.149 [2024-10-05 08:54:44.415034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:08.149 [2024-10-05 08:54:44.415096] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:08.149 [2024-10-05 08:54:44.415148] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:18:08.149 [2024-10-05 08:54:44.415295] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:08.149 spare 00:18:08.149 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.149 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:08.149 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.149 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.149 [2024-10-05 08:54:44.515176] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:08.149 [2024-10-05 08:54:44.515202] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:08.149 [2024-10-05 08:54:44.515282] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:08.149 [2024-10-05 08:54:44.515404] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:08.149 [2024-10-05 08:54:44.515412] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:08.149 [2024-10-05 08:54:44.515520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:08.149 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.149 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:08.149 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:08.149 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:08.149 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:08.149 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:08.149 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:08.149 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.149 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.149 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.149 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.149 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.149 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.149 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.149 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.149 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.149 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.149 "name": "raid_bdev1", 00:18:08.150 "uuid": "85c45d62-cca3-42a4-b531-ce44c58173fb", 00:18:08.150 "strip_size_kb": 0, 00:18:08.150 "state": "online", 00:18:08.150 "raid_level": "raid1", 00:18:08.150 "superblock": true, 00:18:08.150 "num_base_bdevs": 2, 00:18:08.150 "num_base_bdevs_discovered": 2, 00:18:08.150 "num_base_bdevs_operational": 2, 00:18:08.150 "base_bdevs_list": [ 00:18:08.150 { 00:18:08.150 "name": "spare", 00:18:08.150 "uuid": "5e6b7d1d-e11d-5b3d-9c88-9c3fb1ee13e5", 00:18:08.150 
"is_configured": true, 00:18:08.150 "data_offset": 256, 00:18:08.150 "data_size": 7936 00:18:08.150 }, 00:18:08.150 { 00:18:08.150 "name": "BaseBdev2", 00:18:08.150 "uuid": "bcf96514-5bd3-5e96-add8-d5d5c9bdcea8", 00:18:08.150 "is_configured": true, 00:18:08.150 "data_offset": 256, 00:18:08.150 "data_size": 7936 00:18:08.150 } 00:18:08.150 ] 00:18:08.150 }' 00:18:08.150 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.150 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.719 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:08.719 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:08.719 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:08.719 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:08.719 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:08.719 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.719 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.719 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.719 08:54:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.719 08:54:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.719 08:54:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:08.719 "name": "raid_bdev1", 00:18:08.719 "uuid": 
"85c45d62-cca3-42a4-b531-ce44c58173fb", 00:18:08.719 "strip_size_kb": 0, 00:18:08.719 "state": "online", 00:18:08.719 "raid_level": "raid1", 00:18:08.719 "superblock": true, 00:18:08.719 "num_base_bdevs": 2, 00:18:08.719 "num_base_bdevs_discovered": 2, 00:18:08.719 "num_base_bdevs_operational": 2, 00:18:08.719 "base_bdevs_list": [ 00:18:08.719 { 00:18:08.719 "name": "spare", 00:18:08.719 "uuid": "5e6b7d1d-e11d-5b3d-9c88-9c3fb1ee13e5", 00:18:08.719 "is_configured": true, 00:18:08.719 "data_offset": 256, 00:18:08.719 "data_size": 7936 00:18:08.719 }, 00:18:08.719 { 00:18:08.719 "name": "BaseBdev2", 00:18:08.719 "uuid": "bcf96514-5bd3-5e96-add8-d5d5c9bdcea8", 00:18:08.719 "is_configured": true, 00:18:08.719 "data_offset": 256, 00:18:08.719 "data_size": 7936 00:18:08.719 } 00:18:08.719 ] 00:18:08.719 }' 00:18:08.719 08:54:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:08.719 08:54:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:08.720 08:54:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.720 08:54:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:08.720 08:54:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:08.720 08:54:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.720 08:54:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.720 08:54:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.720 08:54:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.720 08:54:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # 
[[ spare == \s\p\a\r\e ]] 00:18:08.720 08:54:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:08.720 08:54:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.720 08:54:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.720 [2024-10-05 08:54:45.147750] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:08.720 08:54:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.720 08:54:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:08.720 08:54:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:08.720 08:54:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:08.720 08:54:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:08.720 08:54:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:08.720 08:54:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:08.720 08:54:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.720 08:54:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.720 08:54:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.720 08:54:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.720 08:54:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.720 08:54:45 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.720 08:54:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.720 08:54:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.720 08:54:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.720 08:54:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.720 "name": "raid_bdev1", 00:18:08.720 "uuid": "85c45d62-cca3-42a4-b531-ce44c58173fb", 00:18:08.720 "strip_size_kb": 0, 00:18:08.720 "state": "online", 00:18:08.720 "raid_level": "raid1", 00:18:08.720 "superblock": true, 00:18:08.720 "num_base_bdevs": 2, 00:18:08.720 "num_base_bdevs_discovered": 1, 00:18:08.720 "num_base_bdevs_operational": 1, 00:18:08.720 "base_bdevs_list": [ 00:18:08.720 { 00:18:08.720 "name": null, 00:18:08.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.720 "is_configured": false, 00:18:08.720 "data_offset": 0, 00:18:08.720 "data_size": 7936 00:18:08.720 }, 00:18:08.720 { 00:18:08.720 "name": "BaseBdev2", 00:18:08.720 "uuid": "bcf96514-5bd3-5e96-add8-d5d5c9bdcea8", 00:18:08.720 "is_configured": true, 00:18:08.720 "data_offset": 256, 00:18:08.720 "data_size": 7936 00:18:08.720 } 00:18:08.720 ] 00:18:08.720 }' 00:18:08.720 08:54:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.720 08:54:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.289 08:54:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:09.289 08:54:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.289 08:54:45 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:09.289 [2024-10-05 08:54:45.626920] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:09.289 [2024-10-05 08:54:45.627165] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:09.289 [2024-10-05 08:54:45.627230] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:09.289 [2024-10-05 08:54:45.627292] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:09.289 [2024-10-05 08:54:45.641396] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:09.289 08:54:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.289 08:54:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:09.289 [2024-10-05 08:54:45.643263] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:10.229 08:54:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:10.229 08:54:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:10.229 08:54:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:10.229 08:54:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:10.229 08:54:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:10.229 08:54:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.229 08:54:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.229 08:54:46 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.229 08:54:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.229 08:54:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.229 08:54:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:10.229 "name": "raid_bdev1", 00:18:10.229 "uuid": "85c45d62-cca3-42a4-b531-ce44c58173fb", 00:18:10.229 "strip_size_kb": 0, 00:18:10.229 "state": "online", 00:18:10.229 "raid_level": "raid1", 00:18:10.229 "superblock": true, 00:18:10.229 "num_base_bdevs": 2, 00:18:10.229 "num_base_bdevs_discovered": 2, 00:18:10.229 "num_base_bdevs_operational": 2, 00:18:10.229 "process": { 00:18:10.229 "type": "rebuild", 00:18:10.229 "target": "spare", 00:18:10.229 "progress": { 00:18:10.229 "blocks": 2560, 00:18:10.229 "percent": 32 00:18:10.229 } 00:18:10.229 }, 00:18:10.229 "base_bdevs_list": [ 00:18:10.229 { 00:18:10.229 "name": "spare", 00:18:10.229 "uuid": "5e6b7d1d-e11d-5b3d-9c88-9c3fb1ee13e5", 00:18:10.229 "is_configured": true, 00:18:10.229 "data_offset": 256, 00:18:10.229 "data_size": 7936 00:18:10.229 }, 00:18:10.229 { 00:18:10.229 "name": "BaseBdev2", 00:18:10.229 "uuid": "bcf96514-5bd3-5e96-add8-d5d5c9bdcea8", 00:18:10.229 "is_configured": true, 00:18:10.229 "data_offset": 256, 00:18:10.229 "data_size": 7936 00:18:10.229 } 00:18:10.229 ] 00:18:10.229 }' 00:18:10.502 08:54:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:10.502 08:54:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:10.502 08:54:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:10.502 08:54:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == 
\s\p\a\r\e ]] 00:18:10.502 08:54:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:10.502 08:54:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.502 08:54:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.502 [2024-10-05 08:54:46.803303] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:10.502 [2024-10-05 08:54:46.848962] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:10.502 [2024-10-05 08:54:46.849028] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:10.502 [2024-10-05 08:54:46.849041] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:10.502 [2024-10-05 08:54:46.849050] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:10.502 08:54:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.502 08:54:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:10.502 08:54:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:10.502 08:54:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:10.502 08:54:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:10.502 08:54:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:10.502 08:54:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:10.502 08:54:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:18:10.502 08:54:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.502 08:54:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.502 08:54:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.502 08:54:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.502 08:54:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.502 08:54:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.502 08:54:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.502 08:54:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.502 08:54:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.502 "name": "raid_bdev1", 00:18:10.502 "uuid": "85c45d62-cca3-42a4-b531-ce44c58173fb", 00:18:10.502 "strip_size_kb": 0, 00:18:10.502 "state": "online", 00:18:10.502 "raid_level": "raid1", 00:18:10.502 "superblock": true, 00:18:10.502 "num_base_bdevs": 2, 00:18:10.502 "num_base_bdevs_discovered": 1, 00:18:10.502 "num_base_bdevs_operational": 1, 00:18:10.502 "base_bdevs_list": [ 00:18:10.502 { 00:18:10.502 "name": null, 00:18:10.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.502 "is_configured": false, 00:18:10.502 "data_offset": 0, 00:18:10.502 "data_size": 7936 00:18:10.502 }, 00:18:10.502 { 00:18:10.502 "name": "BaseBdev2", 00:18:10.502 "uuid": "bcf96514-5bd3-5e96-add8-d5d5c9bdcea8", 00:18:10.502 "is_configured": true, 00:18:10.502 "data_offset": 256, 00:18:10.502 "data_size": 7936 00:18:10.502 } 00:18:10.502 ] 00:18:10.502 }' 00:18:10.502 08:54:46 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.502 08:54:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.089 08:54:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:11.089 08:54:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.089 08:54:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.089 [2024-10-05 08:54:47.327865] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:11.089 [2024-10-05 08:54:47.327971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:11.089 [2024-10-05 08:54:47.328013] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:11.089 [2024-10-05 08:54:47.328044] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:11.089 [2024-10-05 08:54:47.328313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:11.089 [2024-10-05 08:54:47.328368] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:11.089 [2024-10-05 08:54:47.328447] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:11.089 [2024-10-05 08:54:47.328484] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:11.089 [2024-10-05 08:54:47.328522] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:11.089 [2024-10-05 08:54:47.328590] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:11.089 [2024-10-05 08:54:47.342438] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:11.089 spare 00:18:11.089 08:54:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.089 [2024-10-05 08:54:47.344279] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:11.089 08:54:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:12.029 08:54:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:12.029 08:54:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:12.030 08:54:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:12.030 08:54:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:12.030 08:54:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:12.030 08:54:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.030 08:54:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.030 08:54:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.030 08:54:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.030 08:54:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.030 08:54:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:12.030 "name": 
"raid_bdev1", 00:18:12.030 "uuid": "85c45d62-cca3-42a4-b531-ce44c58173fb", 00:18:12.030 "strip_size_kb": 0, 00:18:12.030 "state": "online", 00:18:12.030 "raid_level": "raid1", 00:18:12.030 "superblock": true, 00:18:12.030 "num_base_bdevs": 2, 00:18:12.030 "num_base_bdevs_discovered": 2, 00:18:12.030 "num_base_bdevs_operational": 2, 00:18:12.030 "process": { 00:18:12.030 "type": "rebuild", 00:18:12.030 "target": "spare", 00:18:12.030 "progress": { 00:18:12.030 "blocks": 2560, 00:18:12.030 "percent": 32 00:18:12.030 } 00:18:12.030 }, 00:18:12.030 "base_bdevs_list": [ 00:18:12.030 { 00:18:12.030 "name": "spare", 00:18:12.030 "uuid": "5e6b7d1d-e11d-5b3d-9c88-9c3fb1ee13e5", 00:18:12.030 "is_configured": true, 00:18:12.030 "data_offset": 256, 00:18:12.030 "data_size": 7936 00:18:12.030 }, 00:18:12.030 { 00:18:12.030 "name": "BaseBdev2", 00:18:12.030 "uuid": "bcf96514-5bd3-5e96-add8-d5d5c9bdcea8", 00:18:12.030 "is_configured": true, 00:18:12.030 "data_offset": 256, 00:18:12.030 "data_size": 7936 00:18:12.030 } 00:18:12.030 ] 00:18:12.030 }' 00:18:12.030 08:54:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:12.030 08:54:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:12.030 08:54:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:12.030 08:54:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:12.030 08:54:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:12.030 08:54:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.030 08:54:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.030 [2024-10-05 08:54:48.492677] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:18:12.289 [2024-10-05 08:54:48.549039] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:12.289 [2024-10-05 08:54:48.549087] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:12.289 [2024-10-05 08:54:48.549103] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:12.289 [2024-10-05 08:54:48.549110] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:12.289 08:54:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.289 08:54:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:12.289 08:54:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:12.289 08:54:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:12.289 08:54:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:12.289 08:54:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:12.289 08:54:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:12.289 08:54:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.289 08:54:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.289 08:54:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.289 08:54:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.289 08:54:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:12.289 08:54:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.289 08:54:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.289 08:54:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.289 08:54:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.289 08:54:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.289 "name": "raid_bdev1", 00:18:12.289 "uuid": "85c45d62-cca3-42a4-b531-ce44c58173fb", 00:18:12.289 "strip_size_kb": 0, 00:18:12.289 "state": "online", 00:18:12.289 "raid_level": "raid1", 00:18:12.289 "superblock": true, 00:18:12.289 "num_base_bdevs": 2, 00:18:12.289 "num_base_bdevs_discovered": 1, 00:18:12.289 "num_base_bdevs_operational": 1, 00:18:12.289 "base_bdevs_list": [ 00:18:12.289 { 00:18:12.289 "name": null, 00:18:12.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.289 "is_configured": false, 00:18:12.289 "data_offset": 0, 00:18:12.289 "data_size": 7936 00:18:12.289 }, 00:18:12.289 { 00:18:12.289 "name": "BaseBdev2", 00:18:12.289 "uuid": "bcf96514-5bd3-5e96-add8-d5d5c9bdcea8", 00:18:12.289 "is_configured": true, 00:18:12.289 "data_offset": 256, 00:18:12.289 "data_size": 7936 00:18:12.289 } 00:18:12.289 ] 00:18:12.289 }' 00:18:12.289 08:54:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.289 08:54:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.859 08:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:12.859 08:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:12.859 08:54:49 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:12.859 08:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:12.859 08:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:12.859 08:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.859 08:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.859 08:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.859 08:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.859 08:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.859 08:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:12.859 "name": "raid_bdev1", 00:18:12.859 "uuid": "85c45d62-cca3-42a4-b531-ce44c58173fb", 00:18:12.859 "strip_size_kb": 0, 00:18:12.859 "state": "online", 00:18:12.859 "raid_level": "raid1", 00:18:12.859 "superblock": true, 00:18:12.859 "num_base_bdevs": 2, 00:18:12.859 "num_base_bdevs_discovered": 1, 00:18:12.859 "num_base_bdevs_operational": 1, 00:18:12.859 "base_bdevs_list": [ 00:18:12.859 { 00:18:12.859 "name": null, 00:18:12.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.859 "is_configured": false, 00:18:12.859 "data_offset": 0, 00:18:12.859 "data_size": 7936 00:18:12.859 }, 00:18:12.859 { 00:18:12.859 "name": "BaseBdev2", 00:18:12.859 "uuid": "bcf96514-5bd3-5e96-add8-d5d5c9bdcea8", 00:18:12.859 "is_configured": true, 00:18:12.859 "data_offset": 256, 00:18:12.859 "data_size": 7936 00:18:12.859 } 00:18:12.859 ] 00:18:12.859 }' 00:18:12.859 08:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:12.859 08:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:12.859 08:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:12.859 08:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:12.859 08:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:12.859 08:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.859 08:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.859 08:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.859 08:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:12.859 08:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.859 08:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.859 [2024-10-05 08:54:49.210982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:12.859 [2024-10-05 08:54:49.211028] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.859 [2024-10-05 08:54:49.211051] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:12.859 [2024-10-05 08:54:49.211061] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.859 [2024-10-05 08:54:49.211259] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.859 [2024-10-05 08:54:49.211272] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:18:12.859 [2024-10-05 08:54:49.211319] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:12.859 [2024-10-05 08:54:49.211331] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:12.859 [2024-10-05 08:54:49.211343] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:12.859 [2024-10-05 08:54:49.211352] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:12.859 BaseBdev1 00:18:12.859 08:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.859 08:54:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:13.797 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:13.797 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:13.797 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:13.797 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:13.797 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:13.797 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:13.798 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.798 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.798 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:13.798 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.798 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.798 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.798 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.798 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.798 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.057 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.057 "name": "raid_bdev1", 00:18:14.057 "uuid": "85c45d62-cca3-42a4-b531-ce44c58173fb", 00:18:14.057 "strip_size_kb": 0, 00:18:14.057 "state": "online", 00:18:14.057 "raid_level": "raid1", 00:18:14.057 "superblock": true, 00:18:14.057 "num_base_bdevs": 2, 00:18:14.057 "num_base_bdevs_discovered": 1, 00:18:14.057 "num_base_bdevs_operational": 1, 00:18:14.057 "base_bdevs_list": [ 00:18:14.057 { 00:18:14.057 "name": null, 00:18:14.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.057 "is_configured": false, 00:18:14.057 "data_offset": 0, 00:18:14.057 "data_size": 7936 00:18:14.057 }, 00:18:14.057 { 00:18:14.057 "name": "BaseBdev2", 00:18:14.057 "uuid": "bcf96514-5bd3-5e96-add8-d5d5c9bdcea8", 00:18:14.057 "is_configured": true, 00:18:14.057 "data_offset": 256, 00:18:14.057 "data_size": 7936 00:18:14.057 } 00:18:14.057 ] 00:18:14.057 }' 00:18:14.057 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.057 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.317 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:18:14.317 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:14.317 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:14.317 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:14.317 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:14.317 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.317 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.317 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.317 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.317 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.317 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:14.317 "name": "raid_bdev1", 00:18:14.317 "uuid": "85c45d62-cca3-42a4-b531-ce44c58173fb", 00:18:14.317 "strip_size_kb": 0, 00:18:14.317 "state": "online", 00:18:14.317 "raid_level": "raid1", 00:18:14.317 "superblock": true, 00:18:14.317 "num_base_bdevs": 2, 00:18:14.317 "num_base_bdevs_discovered": 1, 00:18:14.317 "num_base_bdevs_operational": 1, 00:18:14.317 "base_bdevs_list": [ 00:18:14.317 { 00:18:14.317 "name": null, 00:18:14.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.317 "is_configured": false, 00:18:14.317 "data_offset": 0, 00:18:14.317 "data_size": 7936 00:18:14.317 }, 00:18:14.317 { 00:18:14.317 "name": "BaseBdev2", 00:18:14.317 "uuid": "bcf96514-5bd3-5e96-add8-d5d5c9bdcea8", 00:18:14.317 "is_configured": 
true, 00:18:14.317 "data_offset": 256, 00:18:14.317 "data_size": 7936 00:18:14.317 } 00:18:14.317 ] 00:18:14.317 }' 00:18:14.317 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.317 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:14.317 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:14.317 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:14.317 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:14.317 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:18:14.317 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:14.317 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:14.317 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:14.317 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:14.317 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:14.317 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:14.317 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.317 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.577 [2024-10-05 08:54:50.792324] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:14.577 [2024-10-05 08:54:50.792472] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:14.577 [2024-10-05 08:54:50.792486] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:14.577 request: 00:18:14.577 { 00:18:14.577 "base_bdev": "BaseBdev1", 00:18:14.577 "raid_bdev": "raid_bdev1", 00:18:14.577 "method": "bdev_raid_add_base_bdev", 00:18:14.577 "req_id": 1 00:18:14.577 } 00:18:14.577 Got JSON-RPC error response 00:18:14.577 response: 00:18:14.577 { 00:18:14.577 "code": -22, 00:18:14.577 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:14.577 } 00:18:14.577 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:14.577 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:18:14.577 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:14.577 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:14.577 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:14.577 08:54:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:15.516 08:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:15.516 08:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:15.517 08:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:15.517 08:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:18:15.517 08:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:15.517 08:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:15.517 08:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.517 08:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.517 08:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.517 08:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.517 08:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.517 08:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.517 08:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.517 08:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.517 08:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.517 08:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.517 "name": "raid_bdev1", 00:18:15.517 "uuid": "85c45d62-cca3-42a4-b531-ce44c58173fb", 00:18:15.517 "strip_size_kb": 0, 00:18:15.517 "state": "online", 00:18:15.517 "raid_level": "raid1", 00:18:15.517 "superblock": true, 00:18:15.517 "num_base_bdevs": 2, 00:18:15.517 "num_base_bdevs_discovered": 1, 00:18:15.517 "num_base_bdevs_operational": 1, 00:18:15.517 "base_bdevs_list": [ 00:18:15.517 { 00:18:15.517 "name": null, 00:18:15.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.517 "is_configured": false, 00:18:15.517 
"data_offset": 0, 00:18:15.517 "data_size": 7936 00:18:15.517 }, 00:18:15.517 { 00:18:15.517 "name": "BaseBdev2", 00:18:15.517 "uuid": "bcf96514-5bd3-5e96-add8-d5d5c9bdcea8", 00:18:15.517 "is_configured": true, 00:18:15.517 "data_offset": 256, 00:18:15.517 "data_size": 7936 00:18:15.517 } 00:18:15.517 ] 00:18:15.517 }' 00:18:15.517 08:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.517 08:54:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.776 08:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:15.776 08:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:15.776 08:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:15.776 08:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:15.776 08:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:15.776 08:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.776 08:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.776 08:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.776 08:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.036 08:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.036 08:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:16.036 "name": "raid_bdev1", 00:18:16.036 "uuid": "85c45d62-cca3-42a4-b531-ce44c58173fb", 00:18:16.036 
"strip_size_kb": 0, 00:18:16.036 "state": "online", 00:18:16.036 "raid_level": "raid1", 00:18:16.036 "superblock": true, 00:18:16.036 "num_base_bdevs": 2, 00:18:16.036 "num_base_bdevs_discovered": 1, 00:18:16.036 "num_base_bdevs_operational": 1, 00:18:16.036 "base_bdevs_list": [ 00:18:16.036 { 00:18:16.036 "name": null, 00:18:16.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.036 "is_configured": false, 00:18:16.036 "data_offset": 0, 00:18:16.036 "data_size": 7936 00:18:16.036 }, 00:18:16.036 { 00:18:16.036 "name": "BaseBdev2", 00:18:16.036 "uuid": "bcf96514-5bd3-5e96-add8-d5d5c9bdcea8", 00:18:16.036 "is_configured": true, 00:18:16.036 "data_offset": 256, 00:18:16.036 "data_size": 7936 00:18:16.036 } 00:18:16.036 ] 00:18:16.036 }' 00:18:16.036 08:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:16.036 08:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:16.036 08:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:16.036 08:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:16.036 08:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 83920 00:18:16.036 08:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 83920 ']' 00:18:16.036 08:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 83920 00:18:16.036 08:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:18:16.036 08:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:16.036 08:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83920 00:18:16.036 08:54:52 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:16.036 killing process with pid 83920 00:18:16.036 Received shutdown signal, test time was about 60.000000 seconds 00:18:16.036 00:18:16.036 Latency(us) 00:18:16.036 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.036 =================================================================================================================== 00:18:16.036 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:16.036 08:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:16.036 08:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83920' 00:18:16.036 08:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 83920 00:18:16.036 [2024-10-05 08:54:52.396514] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:16.036 [2024-10-05 08:54:52.396621] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:16.036 [2024-10-05 08:54:52.396664] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:16.036 08:54:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 83920 00:18:16.036 [2024-10-05 08:54:52.396675] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:16.296 [2024-10-05 08:54:52.697588] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:17.679 08:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:18:17.679 00:18:17.679 real 0m19.993s 00:18:17.679 user 0m25.963s 00:18:17.679 sys 0m2.831s 00:18:17.679 08:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:18:17.679 ************************************ 00:18:17.679 END TEST raid_rebuild_test_sb_md_separate 00:18:17.679 ************************************ 00:18:17.679 08:54:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.679 08:54:53 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:18:17.679 08:54:53 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:18:17.679 08:54:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:17.679 08:54:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:17.679 08:54:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:17.679 ************************************ 00:18:17.679 START TEST raid_state_function_test_sb_md_interleaved 00:18:17.679 ************************************ 00:18:17.679 08:54:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:18:17.679 08:54:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:17.679 08:54:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:17.679 08:54:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:17.679 08:54:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:17.679 08:54:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:17.679 08:54:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:17.679 08:54:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:17.679 08:54:53 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:17.679 08:54:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:17.679 08:54:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:17.679 08:54:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:17.679 08:54:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:17.679 08:54:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:17.679 08:54:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:17.679 08:54:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:17.679 08:54:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:17.679 08:54:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:17.679 08:54:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:17.679 08:54:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:17.679 08:54:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:17.679 08:54:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:17.679 08:54:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:17.679 08:54:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # 
raid_pid=84496 00:18:17.679 08:54:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:17.679 08:54:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84496' 00:18:17.679 Process raid pid: 84496 00:18:17.679 08:54:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 84496 00:18:17.679 08:54:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 84496 ']' 00:18:17.680 08:54:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.680 08:54:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:17.680 08:54:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:17.680 08:54:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:17.680 08:54:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.680 [2024-10-05 08:54:54.041010] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:18:17.680 [2024-10-05 08:54:54.041126] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:17.940 [2024-10-05 08:54:54.209606] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.940 [2024-10-05 08:54:54.401877] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.200 [2024-10-05 08:54:54.599596] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:18.200 [2024-10-05 08:54:54.599709] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:18.460 08:54:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:18.460 08:54:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:18:18.460 08:54:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:18.460 08:54:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.460 08:54:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.460 [2024-10-05 08:54:54.844751] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:18.460 [2024-10-05 08:54:54.844857] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:18.460 [2024-10-05 08:54:54.844888] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:18.460 [2024-10-05 08:54:54.844910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:18.460 08:54:54 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.460 08:54:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:18.460 08:54:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:18.460 08:54:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:18.460 08:54:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:18.460 08:54:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:18.460 08:54:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:18.460 08:54:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.461 08:54:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.461 08:54:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.461 08:54:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.461 08:54:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:18.461 08:54:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.461 08:54:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.461 08:54:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.461 08:54:54 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.461 08:54:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.461 "name": "Existed_Raid", 00:18:18.461 "uuid": "21b3a3cc-b6f5-4ac3-87ad-39a0c0de966f", 00:18:18.461 "strip_size_kb": 0, 00:18:18.461 "state": "configuring", 00:18:18.461 "raid_level": "raid1", 00:18:18.461 "superblock": true, 00:18:18.461 "num_base_bdevs": 2, 00:18:18.461 "num_base_bdevs_discovered": 0, 00:18:18.461 "num_base_bdevs_operational": 2, 00:18:18.461 "base_bdevs_list": [ 00:18:18.461 { 00:18:18.461 "name": "BaseBdev1", 00:18:18.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.461 "is_configured": false, 00:18:18.461 "data_offset": 0, 00:18:18.461 "data_size": 0 00:18:18.461 }, 00:18:18.461 { 00:18:18.461 "name": "BaseBdev2", 00:18:18.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.461 "is_configured": false, 00:18:18.461 "data_offset": 0, 00:18:18.461 "data_size": 0 00:18:18.461 } 00:18:18.461 ] 00:18:18.461 }' 00:18:18.461 08:54:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.461 08:54:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.031 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:19.031 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.031 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.031 [2024-10-05 08:54:55.323822] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:19.031 [2024-10-05 08:54:55.323896] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:18:19.031 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.031 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:19.031 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.031 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.031 [2024-10-05 08:54:55.335834] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:19.031 [2024-10-05 08:54:55.335905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:19.031 [2024-10-05 08:54:55.335930] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:19.031 [2024-10-05 08:54:55.335953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:19.031 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.031 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:18:19.031 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.031 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.031 [2024-10-05 08:54:55.412024] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:19.031 BaseBdev1 00:18:19.031 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.031 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:19.031 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:19.031 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:19.031 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:18:19.031 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:19.031 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:19.031 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:19.031 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.031 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.031 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.031 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:19.031 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.031 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.031 [ 00:18:19.031 { 00:18:19.031 "name": "BaseBdev1", 00:18:19.031 "aliases": [ 00:18:19.031 "ee1b59ad-1385-4656-acaf-9b84af02d2c8" 00:18:19.031 ], 00:18:19.031 "product_name": "Malloc disk", 00:18:19.031 "block_size": 4128, 00:18:19.031 "num_blocks": 8192, 00:18:19.031 "uuid": "ee1b59ad-1385-4656-acaf-9b84af02d2c8", 00:18:19.031 "md_size": 32, 00:18:19.031 
"md_interleave": true, 00:18:19.031 "dif_type": 0, 00:18:19.031 "assigned_rate_limits": { 00:18:19.031 "rw_ios_per_sec": 0, 00:18:19.031 "rw_mbytes_per_sec": 0, 00:18:19.031 "r_mbytes_per_sec": 0, 00:18:19.031 "w_mbytes_per_sec": 0 00:18:19.031 }, 00:18:19.031 "claimed": true, 00:18:19.031 "claim_type": "exclusive_write", 00:18:19.031 "zoned": false, 00:18:19.031 "supported_io_types": { 00:18:19.031 "read": true, 00:18:19.031 "write": true, 00:18:19.031 "unmap": true, 00:18:19.031 "flush": true, 00:18:19.031 "reset": true, 00:18:19.031 "nvme_admin": false, 00:18:19.031 "nvme_io": false, 00:18:19.031 "nvme_io_md": false, 00:18:19.031 "write_zeroes": true, 00:18:19.031 "zcopy": true, 00:18:19.031 "get_zone_info": false, 00:18:19.031 "zone_management": false, 00:18:19.031 "zone_append": false, 00:18:19.031 "compare": false, 00:18:19.031 "compare_and_write": false, 00:18:19.031 "abort": true, 00:18:19.031 "seek_hole": false, 00:18:19.031 "seek_data": false, 00:18:19.031 "copy": true, 00:18:19.031 "nvme_iov_md": false 00:18:19.032 }, 00:18:19.032 "memory_domains": [ 00:18:19.032 { 00:18:19.032 "dma_device_id": "system", 00:18:19.032 "dma_device_type": 1 00:18:19.032 }, 00:18:19.032 { 00:18:19.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.032 "dma_device_type": 2 00:18:19.032 } 00:18:19.032 ], 00:18:19.032 "driver_specific": {} 00:18:19.032 } 00:18:19.032 ] 00:18:19.032 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.032 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:18:19.032 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:19.032 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:19.032 08:54:55 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:19.032 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:19.032 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:19.032 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:19.032 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.032 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.032 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.032 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.032 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.032 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:19.032 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.032 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.032 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.032 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.032 "name": "Existed_Raid", 00:18:19.032 "uuid": "c81ed418-edd2-4dde-88dd-c48bed078dc0", 00:18:19.032 "strip_size_kb": 0, 00:18:19.032 "state": "configuring", 00:18:19.032 "raid_level": "raid1", 
00:18:19.032 "superblock": true, 00:18:19.032 "num_base_bdevs": 2, 00:18:19.032 "num_base_bdevs_discovered": 1, 00:18:19.032 "num_base_bdevs_operational": 2, 00:18:19.032 "base_bdevs_list": [ 00:18:19.032 { 00:18:19.032 "name": "BaseBdev1", 00:18:19.032 "uuid": "ee1b59ad-1385-4656-acaf-9b84af02d2c8", 00:18:19.032 "is_configured": true, 00:18:19.032 "data_offset": 256, 00:18:19.032 "data_size": 7936 00:18:19.032 }, 00:18:19.032 { 00:18:19.032 "name": "BaseBdev2", 00:18:19.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.032 "is_configured": false, 00:18:19.032 "data_offset": 0, 00:18:19.032 "data_size": 0 00:18:19.032 } 00:18:19.032 ] 00:18:19.032 }' 00:18:19.032 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.032 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.602 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:19.602 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.602 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.602 [2024-10-05 08:54:55.947138] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:19.602 [2024-10-05 08:54:55.947218] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:19.602 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.602 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:19.602 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:18:19.602 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.602 [2024-10-05 08:54:55.959176] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:19.602 [2024-10-05 08:54:55.960911] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:19.602 [2024-10-05 08:54:55.960965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:19.602 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.602 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:19.602 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:19.602 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:19.602 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:19.602 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:19.602 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:19.602 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:19.602 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:19.602 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.602 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.602 
08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.602 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.602 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.602 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:19.602 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.602 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.602 08:54:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.602 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.602 "name": "Existed_Raid", 00:18:19.602 "uuid": "870b3c37-0ecc-420c-8a9f-888cf7e83d71", 00:18:19.602 "strip_size_kb": 0, 00:18:19.602 "state": "configuring", 00:18:19.602 "raid_level": "raid1", 00:18:19.602 "superblock": true, 00:18:19.602 "num_base_bdevs": 2, 00:18:19.602 "num_base_bdevs_discovered": 1, 00:18:19.602 "num_base_bdevs_operational": 2, 00:18:19.602 "base_bdevs_list": [ 00:18:19.602 { 00:18:19.602 "name": "BaseBdev1", 00:18:19.602 "uuid": "ee1b59ad-1385-4656-acaf-9b84af02d2c8", 00:18:19.602 "is_configured": true, 00:18:19.602 "data_offset": 256, 00:18:19.602 "data_size": 7936 00:18:19.602 }, 00:18:19.602 { 00:18:19.602 "name": "BaseBdev2", 00:18:19.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.602 "is_configured": false, 00:18:19.602 "data_offset": 0, 00:18:19.602 "data_size": 0 00:18:19.602 } 00:18:19.602 ] 00:18:19.602 }' 00:18:19.602 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:18:19.602 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.172 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:18:20.172 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.172 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.172 [2024-10-05 08:54:56.480489] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:20.172 [2024-10-05 08:54:56.480733] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:20.172 [2024-10-05 08:54:56.480771] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:20.172 [2024-10-05 08:54:56.480885] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:20.172 [2024-10-05 08:54:56.481018] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:20.172 [2024-10-05 08:54:56.481057] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:20.172 [2024-10-05 08:54:56.481153] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.172 BaseBdev2 00:18:20.172 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.172 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:20.172 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:20.172 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:18:20.172 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:18:20.172 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:20.172 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:20.172 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:20.172 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.172 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.172 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.172 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:20.172 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.172 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.172 [ 00:18:20.172 { 00:18:20.172 "name": "BaseBdev2", 00:18:20.172 "aliases": [ 00:18:20.172 "d87c5a5e-0a37-4b28-a93d-4bb4c72cf3a8" 00:18:20.172 ], 00:18:20.172 "product_name": "Malloc disk", 00:18:20.172 "block_size": 4128, 00:18:20.172 "num_blocks": 8192, 00:18:20.172 "uuid": "d87c5a5e-0a37-4b28-a93d-4bb4c72cf3a8", 00:18:20.172 "md_size": 32, 00:18:20.172 "md_interleave": true, 00:18:20.172 "dif_type": 0, 00:18:20.172 "assigned_rate_limits": { 00:18:20.172 "rw_ios_per_sec": 0, 00:18:20.172 "rw_mbytes_per_sec": 0, 00:18:20.172 "r_mbytes_per_sec": 0, 00:18:20.172 "w_mbytes_per_sec": 0 00:18:20.172 }, 00:18:20.172 "claimed": true, 00:18:20.172 "claim_type": "exclusive_write", 
00:18:20.172 "zoned": false, 00:18:20.172 "supported_io_types": { 00:18:20.172 "read": true, 00:18:20.172 "write": true, 00:18:20.172 "unmap": true, 00:18:20.172 "flush": true, 00:18:20.172 "reset": true, 00:18:20.172 "nvme_admin": false, 00:18:20.172 "nvme_io": false, 00:18:20.172 "nvme_io_md": false, 00:18:20.172 "write_zeroes": true, 00:18:20.172 "zcopy": true, 00:18:20.172 "get_zone_info": false, 00:18:20.172 "zone_management": false, 00:18:20.172 "zone_append": false, 00:18:20.172 "compare": false, 00:18:20.172 "compare_and_write": false, 00:18:20.172 "abort": true, 00:18:20.172 "seek_hole": false, 00:18:20.172 "seek_data": false, 00:18:20.172 "copy": true, 00:18:20.172 "nvme_iov_md": false 00:18:20.172 }, 00:18:20.172 "memory_domains": [ 00:18:20.172 { 00:18:20.172 "dma_device_id": "system", 00:18:20.172 "dma_device_type": 1 00:18:20.172 }, 00:18:20.172 { 00:18:20.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:20.172 "dma_device_type": 2 00:18:20.172 } 00:18:20.172 ], 00:18:20.172 "driver_specific": {} 00:18:20.172 } 00:18:20.172 ] 00:18:20.172 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.172 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:18:20.172 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:20.172 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:20.172 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:20.172 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:20.172 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:20.172 
08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:20.172 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:20.172 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:20.172 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.172 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.172 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.172 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.172 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.172 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.172 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.172 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.172 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.172 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.172 "name": "Existed_Raid", 00:18:20.172 "uuid": "870b3c37-0ecc-420c-8a9f-888cf7e83d71", 00:18:20.172 "strip_size_kb": 0, 00:18:20.172 "state": "online", 00:18:20.172 "raid_level": "raid1", 00:18:20.172 "superblock": true, 00:18:20.172 "num_base_bdevs": 2, 00:18:20.172 "num_base_bdevs_discovered": 2, 00:18:20.172 
"num_base_bdevs_operational": 2, 00:18:20.172 "base_bdevs_list": [ 00:18:20.172 { 00:18:20.172 "name": "BaseBdev1", 00:18:20.172 "uuid": "ee1b59ad-1385-4656-acaf-9b84af02d2c8", 00:18:20.172 "is_configured": true, 00:18:20.172 "data_offset": 256, 00:18:20.172 "data_size": 7936 00:18:20.172 }, 00:18:20.172 { 00:18:20.172 "name": "BaseBdev2", 00:18:20.172 "uuid": "d87c5a5e-0a37-4b28-a93d-4bb4c72cf3a8", 00:18:20.172 "is_configured": true, 00:18:20.172 "data_offset": 256, 00:18:20.172 "data_size": 7936 00:18:20.172 } 00:18:20.172 ] 00:18:20.172 }' 00:18:20.172 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.172 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.742 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:20.742 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:20.742 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:20.742 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:20.742 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:20.742 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:20.742 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:20.742 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.742 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.742 08:54:56 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:20.742 [2024-10-05 08:54:56.955989] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:20.742 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.742 08:54:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:20.742 "name": "Existed_Raid", 00:18:20.742 "aliases": [ 00:18:20.742 "870b3c37-0ecc-420c-8a9f-888cf7e83d71" 00:18:20.742 ], 00:18:20.742 "product_name": "Raid Volume", 00:18:20.742 "block_size": 4128, 00:18:20.742 "num_blocks": 7936, 00:18:20.742 "uuid": "870b3c37-0ecc-420c-8a9f-888cf7e83d71", 00:18:20.742 "md_size": 32, 00:18:20.742 "md_interleave": true, 00:18:20.742 "dif_type": 0, 00:18:20.742 "assigned_rate_limits": { 00:18:20.742 "rw_ios_per_sec": 0, 00:18:20.742 "rw_mbytes_per_sec": 0, 00:18:20.742 "r_mbytes_per_sec": 0, 00:18:20.742 "w_mbytes_per_sec": 0 00:18:20.742 }, 00:18:20.742 "claimed": false, 00:18:20.742 "zoned": false, 00:18:20.742 "supported_io_types": { 00:18:20.742 "read": true, 00:18:20.742 "write": true, 00:18:20.742 "unmap": false, 00:18:20.742 "flush": false, 00:18:20.742 "reset": true, 00:18:20.742 "nvme_admin": false, 00:18:20.742 "nvme_io": false, 00:18:20.742 "nvme_io_md": false, 00:18:20.742 "write_zeroes": true, 00:18:20.742 "zcopy": false, 00:18:20.742 "get_zone_info": false, 00:18:20.742 "zone_management": false, 00:18:20.742 "zone_append": false, 00:18:20.742 "compare": false, 00:18:20.742 "compare_and_write": false, 00:18:20.742 "abort": false, 00:18:20.742 "seek_hole": false, 00:18:20.742 "seek_data": false, 00:18:20.742 "copy": false, 00:18:20.742 "nvme_iov_md": false 00:18:20.743 }, 00:18:20.743 "memory_domains": [ 00:18:20.743 { 00:18:20.743 "dma_device_id": "system", 00:18:20.743 "dma_device_type": 1 00:18:20.743 }, 00:18:20.743 { 00:18:20.743 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:20.743 "dma_device_type": 2 00:18:20.743 }, 00:18:20.743 { 00:18:20.743 "dma_device_id": "system", 00:18:20.743 "dma_device_type": 1 00:18:20.743 }, 00:18:20.743 { 00:18:20.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:20.743 "dma_device_type": 2 00:18:20.743 } 00:18:20.743 ], 00:18:20.743 "driver_specific": { 00:18:20.743 "raid": { 00:18:20.743 "uuid": "870b3c37-0ecc-420c-8a9f-888cf7e83d71", 00:18:20.743 "strip_size_kb": 0, 00:18:20.743 "state": "online", 00:18:20.743 "raid_level": "raid1", 00:18:20.743 "superblock": true, 00:18:20.743 "num_base_bdevs": 2, 00:18:20.743 "num_base_bdevs_discovered": 2, 00:18:20.743 "num_base_bdevs_operational": 2, 00:18:20.743 "base_bdevs_list": [ 00:18:20.743 { 00:18:20.743 "name": "BaseBdev1", 00:18:20.743 "uuid": "ee1b59ad-1385-4656-acaf-9b84af02d2c8", 00:18:20.743 "is_configured": true, 00:18:20.743 "data_offset": 256, 00:18:20.743 "data_size": 7936 00:18:20.743 }, 00:18:20.743 { 00:18:20.743 "name": "BaseBdev2", 00:18:20.743 "uuid": "d87c5a5e-0a37-4b28-a93d-4bb4c72cf3a8", 00:18:20.743 "is_configured": true, 00:18:20.743 "data_offset": 256, 00:18:20.743 "data_size": 7936 00:18:20.743 } 00:18:20.743 ] 00:18:20.743 } 00:18:20.743 } 00:18:20.743 }' 00:18:20.743 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:20.743 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:20.743 BaseBdev2' 00:18:20.743 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:20.743 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:20.743 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:18:20.743 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:20.743 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:20.743 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.743 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.743 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.743 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:20.743 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:20.743 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:20.743 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:20.743 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:20.743 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.743 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.743 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.743 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:20.743 
08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:20.743 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:20.743 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.743 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.743 [2024-10-05 08:54:57.183381] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:21.003 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.003 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:21.003 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:21.003 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:21.003 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:21.003 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:21.003 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:21.003 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:21.003 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:21.003 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:21.003 08:54:57 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:21.003 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:21.003 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.003 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.003 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.003 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.003 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.003 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:21.003 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.003 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.003 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.003 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.003 "name": "Existed_Raid", 00:18:21.003 "uuid": "870b3c37-0ecc-420c-8a9f-888cf7e83d71", 00:18:21.003 "strip_size_kb": 0, 00:18:21.003 "state": "online", 00:18:21.003 "raid_level": "raid1", 00:18:21.003 "superblock": true, 00:18:21.003 "num_base_bdevs": 2, 00:18:21.003 "num_base_bdevs_discovered": 1, 00:18:21.003 "num_base_bdevs_operational": 1, 00:18:21.003 "base_bdevs_list": [ 00:18:21.003 { 00:18:21.003 "name": null, 00:18:21.003 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:21.003 "is_configured": false, 00:18:21.003 "data_offset": 0, 00:18:21.003 "data_size": 7936 00:18:21.003 }, 00:18:21.003 { 00:18:21.003 "name": "BaseBdev2", 00:18:21.003 "uuid": "d87c5a5e-0a37-4b28-a93d-4bb4c72cf3a8", 00:18:21.003 "is_configured": true, 00:18:21.003 "data_offset": 256, 00:18:21.003 "data_size": 7936 00:18:21.003 } 00:18:21.003 ] 00:18:21.003 }' 00:18:21.003 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.003 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.263 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:21.263 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:21.263 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:21.263 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.263 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.263 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.263 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.263 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:21.263 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:21.263 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:21.263 08:54:57 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.263 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.263 [2024-10-05 08:54:57.714505] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:21.263 [2024-10-05 08:54:57.714609] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:21.524 [2024-10-05 08:54:57.806735] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:21.524 [2024-10-05 08:54:57.806864] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:21.524 [2024-10-05 08:54:57.806880] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:21.524 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.524 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:21.524 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:21.524 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.524 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:21.524 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.524 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.524 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.524 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:21.524 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:21.524 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:21.524 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 84496 00:18:21.524 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 84496 ']' 00:18:21.524 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 84496 00:18:21.524 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:18:21.524 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:21.524 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84496 00:18:21.524 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:21.524 killing process with pid 84496 00:18:21.524 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:21.524 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84496' 00:18:21.524 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 84496 00:18:21.524 [2024-10-05 08:54:57.900347] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:21.524 08:54:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 84496 00:18:21.524 [2024-10-05 08:54:57.916210] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:22.906 
08:54:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:18:22.906 00:18:22.906 real 0m5.157s 00:18:22.906 user 0m7.333s 00:18:22.906 sys 0m0.935s 00:18:22.906 08:54:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:22.906 ************************************ 00:18:22.906 END TEST raid_state_function_test_sb_md_interleaved 00:18:22.906 ************************************ 00:18:22.906 08:54:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.906 08:54:59 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:18:22.906 08:54:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:22.906 08:54:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:22.906 08:54:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:22.906 ************************************ 00:18:22.906 START TEST raid_superblock_test_md_interleaved 00:18:22.906 ************************************ 00:18:22.906 08:54:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:18:22.906 08:54:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:22.906 08:54:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:22.906 08:54:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:22.906 08:54:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:22.906 08:54:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:22.906 08:54:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:18:22.906 08:54:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:22.906 08:54:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:22.906 08:54:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:22.906 08:54:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:22.906 08:54:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:22.906 08:54:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:22.906 08:54:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:22.906 08:54:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:22.906 08:54:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:22.906 08:54:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=84714 00:18:22.906 08:54:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:22.906 08:54:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 84714 00:18:22.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:22.906 08:54:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 84714 ']' 00:18:22.906 08:54:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.906 08:54:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:22.906 08:54:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.906 08:54:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:22.906 08:54:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.906 [2024-10-05 08:54:59.284916] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:18:22.906 [2024-10-05 08:54:59.285187] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84714 ] 00:18:23.167 [2024-10-05 08:54:59.456279] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.427 [2024-10-05 08:54:59.648994] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.427 [2024-10-05 08:54:59.835508] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:23.427 [2024-10-05 08:54:59.835625] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:23.713 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:23.713 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:18:23.713 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:23.713 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:23.713 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:23.713 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:23.713 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:23.713 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:23.713 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:23.713 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:23.713 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:18:23.713 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.713 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.713 malloc1 00:18:23.713 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.713 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:23.713 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.713 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.713 [2024-10-05 08:55:00.144428] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc1 00:18:23.714 [2024-10-05 08:55:00.144520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.714 [2024-10-05 08:55:00.144559] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:23.714 [2024-10-05 08:55:00.144586] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.714 [2024-10-05 08:55:00.146394] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.714 [2024-10-05 08:55:00.146462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:23.714 pt1 00:18:23.714 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.714 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:23.714 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:23.714 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:23.714 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:23.714 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:23.714 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:23.714 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:23.714 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:23.714 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:18:23.714 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.714 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.974 malloc2 00:18:23.974 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.974 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:23.974 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.974 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.974 [2024-10-05 08:55:00.233073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:23.974 [2024-10-05 08:55:00.233125] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:23.974 [2024-10-05 08:55:00.233144] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:23.974 [2024-10-05 08:55:00.233153] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:23.974 [2024-10-05 08:55:00.234941] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:23.974 [2024-10-05 08:55:00.235043] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:23.974 pt2 00:18:23.974 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.974 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:23.974 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:23.974 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:23.974 
08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.974 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.974 [2024-10-05 08:55:00.245140] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:23.974 [2024-10-05 08:55:00.246889] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:23.974 [2024-10-05 08:55:00.247073] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:23.974 [2024-10-05 08:55:00.247088] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:23.974 [2024-10-05 08:55:00.247157] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:23.974 [2024-10-05 08:55:00.247216] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:23.974 [2024-10-05 08:55:00.247231] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:23.974 [2024-10-05 08:55:00.247294] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:23.974 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.974 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:23.974 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:23.974 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:23.974 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:23.974 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:18:23.974 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:23.974 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.974 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.974 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.974 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.974 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.974 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.974 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.974 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.974 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.974 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.974 "name": "raid_bdev1", 00:18:23.975 "uuid": "2497e8e3-a134-4b1a-affc-6c50a22337fe", 00:18:23.975 "strip_size_kb": 0, 00:18:23.975 "state": "online", 00:18:23.975 "raid_level": "raid1", 00:18:23.975 "superblock": true, 00:18:23.975 "num_base_bdevs": 2, 00:18:23.975 "num_base_bdevs_discovered": 2, 00:18:23.975 "num_base_bdevs_operational": 2, 00:18:23.975 "base_bdevs_list": [ 00:18:23.975 { 00:18:23.975 "name": "pt1", 00:18:23.975 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:23.975 "is_configured": true, 00:18:23.975 "data_offset": 256, 00:18:23.975 "data_size": 7936 00:18:23.975 }, 00:18:23.975 { 00:18:23.975 "name": 
"pt2", 00:18:23.975 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:23.975 "is_configured": true, 00:18:23.975 "data_offset": 256, 00:18:23.975 "data_size": 7936 00:18:23.975 } 00:18:23.975 ] 00:18:23.975 }' 00:18:23.975 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.975 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.545 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:24.545 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:24.545 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:24.545 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:24.545 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:24.545 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:24.545 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:24.545 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:24.545 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.545 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.545 [2024-10-05 08:55:00.736509] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:24.545 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.545 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:18:24.545 "name": "raid_bdev1", 00:18:24.545 "aliases": [ 00:18:24.545 "2497e8e3-a134-4b1a-affc-6c50a22337fe" 00:18:24.545 ], 00:18:24.545 "product_name": "Raid Volume", 00:18:24.545 "block_size": 4128, 00:18:24.545 "num_blocks": 7936, 00:18:24.545 "uuid": "2497e8e3-a134-4b1a-affc-6c50a22337fe", 00:18:24.545 "md_size": 32, 00:18:24.545 "md_interleave": true, 00:18:24.545 "dif_type": 0, 00:18:24.545 "assigned_rate_limits": { 00:18:24.545 "rw_ios_per_sec": 0, 00:18:24.545 "rw_mbytes_per_sec": 0, 00:18:24.545 "r_mbytes_per_sec": 0, 00:18:24.545 "w_mbytes_per_sec": 0 00:18:24.545 }, 00:18:24.545 "claimed": false, 00:18:24.545 "zoned": false, 00:18:24.545 "supported_io_types": { 00:18:24.545 "read": true, 00:18:24.545 "write": true, 00:18:24.545 "unmap": false, 00:18:24.545 "flush": false, 00:18:24.545 "reset": true, 00:18:24.545 "nvme_admin": false, 00:18:24.545 "nvme_io": false, 00:18:24.545 "nvme_io_md": false, 00:18:24.545 "write_zeroes": true, 00:18:24.545 "zcopy": false, 00:18:24.545 "get_zone_info": false, 00:18:24.545 "zone_management": false, 00:18:24.545 "zone_append": false, 00:18:24.545 "compare": false, 00:18:24.545 "compare_and_write": false, 00:18:24.545 "abort": false, 00:18:24.545 "seek_hole": false, 00:18:24.545 "seek_data": false, 00:18:24.545 "copy": false, 00:18:24.545 "nvme_iov_md": false 00:18:24.545 }, 00:18:24.545 "memory_domains": [ 00:18:24.545 { 00:18:24.545 "dma_device_id": "system", 00:18:24.545 "dma_device_type": 1 00:18:24.545 }, 00:18:24.545 { 00:18:24.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:24.545 "dma_device_type": 2 00:18:24.545 }, 00:18:24.545 { 00:18:24.545 "dma_device_id": "system", 00:18:24.545 "dma_device_type": 1 00:18:24.545 }, 00:18:24.545 { 00:18:24.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:24.545 "dma_device_type": 2 00:18:24.545 } 00:18:24.545 ], 00:18:24.545 "driver_specific": { 00:18:24.545 "raid": { 00:18:24.545 "uuid": "2497e8e3-a134-4b1a-affc-6c50a22337fe", 00:18:24.545 
"strip_size_kb": 0, 00:18:24.545 "state": "online", 00:18:24.545 "raid_level": "raid1", 00:18:24.545 "superblock": true, 00:18:24.545 "num_base_bdevs": 2, 00:18:24.545 "num_base_bdevs_discovered": 2, 00:18:24.545 "num_base_bdevs_operational": 2, 00:18:24.545 "base_bdevs_list": [ 00:18:24.545 { 00:18:24.545 "name": "pt1", 00:18:24.545 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:24.545 "is_configured": true, 00:18:24.545 "data_offset": 256, 00:18:24.545 "data_size": 7936 00:18:24.545 }, 00:18:24.545 { 00:18:24.545 "name": "pt2", 00:18:24.545 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:24.545 "is_configured": true, 00:18:24.545 "data_offset": 256, 00:18:24.545 "data_size": 7936 00:18:24.545 } 00:18:24.546 ] 00:18:24.546 } 00:18:24.546 } 00:18:24.546 }' 00:18:24.546 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:24.546 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:24.546 pt2' 00:18:24.546 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:24.546 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:24.546 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:24.546 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:24.546 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:24.546 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.546 08:55:00 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.546 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.546 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:24.546 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:24.546 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:24.546 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:24.546 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:24.546 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.546 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.546 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.546 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:24.546 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:24.546 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:24.546 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:24.546 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.546 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:24.546 [2024-10-05 08:55:00.932194] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:24.546 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.546 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2497e8e3-a134-4b1a-affc-6c50a22337fe 00:18:24.546 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 2497e8e3-a134-4b1a-affc-6c50a22337fe ']' 00:18:24.546 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:24.546 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.546 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.546 [2024-10-05 08:55:00.959884] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:24.546 [2024-10-05 08:55:00.959906] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:24.546 [2024-10-05 08:55:00.959983] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:24.546 [2024-10-05 08:55:00.960032] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:24.546 [2024-10-05 08:55:00.960042] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:24.546 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.546 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.546 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:24.546 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:24.546 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.546 08:55:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 
-- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.805 [2024-10-05 
08:55:01.099674] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:24.805 [2024-10-05 08:55:01.101517] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:24.805 [2024-10-05 08:55:01.101589] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:24.805 [2024-10-05 08:55:01.101635] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:24.805 [2024-10-05 08:55:01.101648] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:24.805 [2024-10-05 08:55:01.101657] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:24.805 request: 00:18:24.805 { 00:18:24.805 "name": "raid_bdev1", 00:18:24.805 "raid_level": "raid1", 00:18:24.805 "base_bdevs": [ 00:18:24.805 "malloc1", 00:18:24.805 "malloc2" 00:18:24.805 ], 00:18:24.805 "superblock": false, 00:18:24.805 "method": "bdev_raid_create", 00:18:24.805 "req_id": 1 00:18:24.805 } 00:18:24.805 Got JSON-RPC error response 00:18:24.805 response: 00:18:24.805 { 00:18:24.805 "code": -17, 00:18:24.805 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:24.805 } 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:24.805 08:55:01 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.805 [2024-10-05 08:55:01.167522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:24.805 [2024-10-05 08:55:01.167607] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:24.805 [2024-10-05 08:55:01.167636] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:24.805 [2024-10-05 08:55:01.167661] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:24.805 [2024-10-05 08:55:01.169426] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:24.805 [2024-10-05 08:55:01.169496] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:24.805 [2024-10-05 08:55:01.169553] 
bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:24.805 [2024-10-05 08:55:01.169629] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:24.805 pt1 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.805 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.806 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.806 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:24.806 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.806 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.806 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.806 "name": "raid_bdev1", 00:18:24.806 "uuid": "2497e8e3-a134-4b1a-affc-6c50a22337fe", 00:18:24.806 "strip_size_kb": 0, 00:18:24.806 "state": "configuring", 00:18:24.806 "raid_level": "raid1", 00:18:24.806 "superblock": true, 00:18:24.806 "num_base_bdevs": 2, 00:18:24.806 "num_base_bdevs_discovered": 1, 00:18:24.806 "num_base_bdevs_operational": 2, 00:18:24.806 "base_bdevs_list": [ 00:18:24.806 { 00:18:24.806 "name": "pt1", 00:18:24.806 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:24.806 "is_configured": true, 00:18:24.806 "data_offset": 256, 00:18:24.806 "data_size": 7936 00:18:24.806 }, 00:18:24.806 { 00:18:24.806 "name": null, 00:18:24.806 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:24.806 "is_configured": false, 00:18:24.806 "data_offset": 256, 00:18:24.806 "data_size": 7936 00:18:24.806 } 00:18:24.806 ] 00:18:24.806 }' 00:18:24.806 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.806 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.375 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:25.375 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:25.375 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:25.375 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:25.375 08:55:01 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.375 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.375 [2024-10-05 08:55:01.618736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:25.375 [2024-10-05 08:55:01.618823] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:25.375 [2024-10-05 08:55:01.618855] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:25.375 [2024-10-05 08:55:01.618882] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:25.375 [2024-10-05 08:55:01.619023] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:25.375 [2024-10-05 08:55:01.619095] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:25.375 [2024-10-05 08:55:01.619149] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:25.375 [2024-10-05 08:55:01.619179] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:25.375 [2024-10-05 08:55:01.619258] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:25.375 [2024-10-05 08:55:01.619267] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:25.375 [2024-10-05 08:55:01.619345] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:25.375 [2024-10-05 08:55:01.619400] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:25.375 [2024-10-05 08:55:01.619407] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:25.375 [2024-10-05 08:55:01.619459] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:25.375 pt2 00:18:25.375 08:55:01 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.375 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:25.375 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:25.375 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:25.375 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:25.375 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:25.375 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:25.375 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:25.375 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:25.375 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.375 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.375 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.375 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.375 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.375 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.375 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.375 08:55:01 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.375 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.375 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.375 "name": "raid_bdev1", 00:18:25.375 "uuid": "2497e8e3-a134-4b1a-affc-6c50a22337fe", 00:18:25.375 "strip_size_kb": 0, 00:18:25.375 "state": "online", 00:18:25.375 "raid_level": "raid1", 00:18:25.375 "superblock": true, 00:18:25.375 "num_base_bdevs": 2, 00:18:25.375 "num_base_bdevs_discovered": 2, 00:18:25.375 "num_base_bdevs_operational": 2, 00:18:25.375 "base_bdevs_list": [ 00:18:25.375 { 00:18:25.375 "name": "pt1", 00:18:25.375 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:25.375 "is_configured": true, 00:18:25.375 "data_offset": 256, 00:18:25.375 "data_size": 7936 00:18:25.375 }, 00:18:25.375 { 00:18:25.375 "name": "pt2", 00:18:25.375 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:25.375 "is_configured": true, 00:18:25.375 "data_offset": 256, 00:18:25.375 "data_size": 7936 00:18:25.375 } 00:18:25.375 ] 00:18:25.375 }' 00:18:25.375 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.375 08:55:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.645 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:25.645 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:25.645 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:25.645 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:25.645 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@184 -- # local name 00:18:25.645 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:25.645 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:25.645 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:25.645 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.645 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.645 [2024-10-05 08:55:02.090168] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:25.922 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.922 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:25.922 "name": "raid_bdev1", 00:18:25.922 "aliases": [ 00:18:25.922 "2497e8e3-a134-4b1a-affc-6c50a22337fe" 00:18:25.922 ], 00:18:25.922 "product_name": "Raid Volume", 00:18:25.922 "block_size": 4128, 00:18:25.922 "num_blocks": 7936, 00:18:25.922 "uuid": "2497e8e3-a134-4b1a-affc-6c50a22337fe", 00:18:25.922 "md_size": 32, 00:18:25.922 "md_interleave": true, 00:18:25.922 "dif_type": 0, 00:18:25.922 "assigned_rate_limits": { 00:18:25.922 "rw_ios_per_sec": 0, 00:18:25.922 "rw_mbytes_per_sec": 0, 00:18:25.922 "r_mbytes_per_sec": 0, 00:18:25.922 "w_mbytes_per_sec": 0 00:18:25.922 }, 00:18:25.922 "claimed": false, 00:18:25.922 "zoned": false, 00:18:25.922 "supported_io_types": { 00:18:25.922 "read": true, 00:18:25.922 "write": true, 00:18:25.922 "unmap": false, 00:18:25.922 "flush": false, 00:18:25.922 "reset": true, 00:18:25.922 "nvme_admin": false, 00:18:25.922 "nvme_io": false, 00:18:25.922 "nvme_io_md": false, 00:18:25.922 "write_zeroes": true, 00:18:25.922 "zcopy": false, 
00:18:25.922 "get_zone_info": false, 00:18:25.922 "zone_management": false, 00:18:25.922 "zone_append": false, 00:18:25.922 "compare": false, 00:18:25.922 "compare_and_write": false, 00:18:25.922 "abort": false, 00:18:25.922 "seek_hole": false, 00:18:25.922 "seek_data": false, 00:18:25.922 "copy": false, 00:18:25.922 "nvme_iov_md": false 00:18:25.922 }, 00:18:25.922 "memory_domains": [ 00:18:25.922 { 00:18:25.922 "dma_device_id": "system", 00:18:25.922 "dma_device_type": 1 00:18:25.922 }, 00:18:25.922 { 00:18:25.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.922 "dma_device_type": 2 00:18:25.922 }, 00:18:25.922 { 00:18:25.922 "dma_device_id": "system", 00:18:25.922 "dma_device_type": 1 00:18:25.922 }, 00:18:25.922 { 00:18:25.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.922 "dma_device_type": 2 00:18:25.922 } 00:18:25.922 ], 00:18:25.922 "driver_specific": { 00:18:25.922 "raid": { 00:18:25.922 "uuid": "2497e8e3-a134-4b1a-affc-6c50a22337fe", 00:18:25.922 "strip_size_kb": 0, 00:18:25.922 "state": "online", 00:18:25.922 "raid_level": "raid1", 00:18:25.922 "superblock": true, 00:18:25.922 "num_base_bdevs": 2, 00:18:25.922 "num_base_bdevs_discovered": 2, 00:18:25.922 "num_base_bdevs_operational": 2, 00:18:25.922 "base_bdevs_list": [ 00:18:25.922 { 00:18:25.922 "name": "pt1", 00:18:25.922 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:25.922 "is_configured": true, 00:18:25.922 "data_offset": 256, 00:18:25.922 "data_size": 7936 00:18:25.922 }, 00:18:25.922 { 00:18:25.922 "name": "pt2", 00:18:25.922 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:25.922 "is_configured": true, 00:18:25.922 "data_offset": 256, 00:18:25.922 "data_size": 7936 00:18:25.922 } 00:18:25.922 ] 00:18:25.922 } 00:18:25.922 } 00:18:25.922 }' 00:18:25.922 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:25.922 08:55:02 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:25.922 pt2' 00:18:25.922 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:25.922 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:25.922 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:25.922 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:25.922 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:25.922 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.922 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.922 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.922 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:25.922 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:25.922 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:25.922 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:25.922 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:25.922 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.922 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.922 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.922 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:25.922 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:25.922 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:25.922 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:25.922 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.922 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.922 [2024-10-05 08:55:02.297793] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:25.922 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.922 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 2497e8e3-a134-4b1a-affc-6c50a22337fe '!=' 2497e8e3-a134-4b1a-affc-6c50a22337fe ']' 00:18:25.922 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:25.922 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:25.922 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:25.922 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:25.922 08:55:02 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.922 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.922 [2024-10-05 08:55:02.337534] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:25.922 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.922 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:25.922 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:25.922 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:25.922 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:25.922 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:25.923 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:25.923 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.923 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.923 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.923 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.923 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.923 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.923 08:55:02 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.923 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.923 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.183 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.183 "name": "raid_bdev1", 00:18:26.183 "uuid": "2497e8e3-a134-4b1a-affc-6c50a22337fe", 00:18:26.183 "strip_size_kb": 0, 00:18:26.183 "state": "online", 00:18:26.183 "raid_level": "raid1", 00:18:26.183 "superblock": true, 00:18:26.183 "num_base_bdevs": 2, 00:18:26.183 "num_base_bdevs_discovered": 1, 00:18:26.183 "num_base_bdevs_operational": 1, 00:18:26.183 "base_bdevs_list": [ 00:18:26.183 { 00:18:26.183 "name": null, 00:18:26.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.183 "is_configured": false, 00:18:26.183 "data_offset": 0, 00:18:26.183 "data_size": 7936 00:18:26.183 }, 00:18:26.183 { 00:18:26.183 "name": "pt2", 00:18:26.183 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:26.183 "is_configured": true, 00:18:26.183 "data_offset": 256, 00:18:26.183 "data_size": 7936 00:18:26.183 } 00:18:26.183 ] 00:18:26.183 }' 00:18:26.183 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.183 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.443 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:26.443 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.443 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.443 [2024-10-05 08:55:02.728934] bdev_raid.c:2407:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:18:26.443 [2024-10-05 08:55:02.729009] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:26.443 [2024-10-05 08:55:02.729083] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:26.443 [2024-10-05 08:55:02.729133] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:26.443 [2024-10-05 08:55:02.729166] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:26.443 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.443 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:26.443 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.443 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.443 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.443 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.443 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:26.443 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:26.443 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:26.443 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:26.443 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:26.443 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:26.443 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.443 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.443 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:26.443 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:26.443 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:26.443 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:26.443 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:18:26.444 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:26.444 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.444 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.444 [2024-10-05 08:55:02.784846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:26.444 [2024-10-05 08:55:02.784890] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.444 [2024-10-05 08:55:02.784904] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:26.444 [2024-10-05 08:55:02.784913] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.444 [2024-10-05 08:55:02.786721] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.444 [2024-10-05 08:55:02.786808] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:26.444 [2024-10-05 08:55:02.786853] 
bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:26.444 [2024-10-05 08:55:02.786895] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:26.444 [2024-10-05 08:55:02.786949] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:26.444 [2024-10-05 08:55:02.786971] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:26.444 [2024-10-05 08:55:02.787072] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:26.444 [2024-10-05 08:55:02.787131] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:26.444 [2024-10-05 08:55:02.787138] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:26.444 [2024-10-05 08:55:02.787189] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:26.444 pt2 00:18:26.444 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.444 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:26.444 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:26.444 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:26.444 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:26.444 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:26.444 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:26.444 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:18:26.444 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.444 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.444 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.444 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.444 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.444 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.444 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.444 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.444 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.444 "name": "raid_bdev1", 00:18:26.444 "uuid": "2497e8e3-a134-4b1a-affc-6c50a22337fe", 00:18:26.444 "strip_size_kb": 0, 00:18:26.444 "state": "online", 00:18:26.444 "raid_level": "raid1", 00:18:26.444 "superblock": true, 00:18:26.444 "num_base_bdevs": 2, 00:18:26.444 "num_base_bdevs_discovered": 1, 00:18:26.444 "num_base_bdevs_operational": 1, 00:18:26.444 "base_bdevs_list": [ 00:18:26.444 { 00:18:26.444 "name": null, 00:18:26.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.444 "is_configured": false, 00:18:26.444 "data_offset": 256, 00:18:26.444 "data_size": 7936 00:18:26.444 }, 00:18:26.444 { 00:18:26.444 "name": "pt2", 00:18:26.444 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:26.444 "is_configured": true, 00:18:26.444 "data_offset": 256, 00:18:26.444 "data_size": 7936 00:18:26.444 } 00:18:26.444 ] 00:18:26.444 }' 00:18:26.444 08:55:02 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.444 08:55:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.014 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:27.014 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.014 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.014 [2024-10-05 08:55:03.256049] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:27.014 [2024-10-05 08:55:03.256113] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:27.014 [2024-10-05 08:55:03.256179] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:27.014 [2024-10-05 08:55:03.256230] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:27.014 [2024-10-05 08:55:03.256260] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:27.014 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.014 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.014 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:27.014 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.014 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.014 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.014 08:55:03 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:27.014 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:27.014 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:27.014 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:27.014 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.014 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.014 [2024-10-05 08:55:03.320007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:27.014 [2024-10-05 08:55:03.320087] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:27.014 [2024-10-05 08:55:03.320118] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:27.014 [2024-10-05 08:55:03.320144] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:27.014 [2024-10-05 08:55:03.321858] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:27.014 [2024-10-05 08:55:03.321923] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:27.014 [2024-10-05 08:55:03.321997] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:27.014 [2024-10-05 08:55:03.322052] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:27.015 [2024-10-05 08:55:03.322157] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:27.015 [2024-10-05 08:55:03.322211] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 
00:18:27.015 [2024-10-05 08:55:03.322255] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:27.015 [2024-10-05 08:55:03.322370] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:27.015 [2024-10-05 08:55:03.322469] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:27.015 [2024-10-05 08:55:03.322515] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:27.015 [2024-10-05 08:55:03.322583] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:27.015 [2024-10-05 08:55:03.322669] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:27.015 [2024-10-05 08:55:03.322706] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:27.015 [2024-10-05 08:55:03.322800] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:27.015 pt1 00:18:27.015 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.015 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:27.015 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:27.015 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:27.015 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:27.015 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:27.015 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:27.015 08:55:03 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:27.015 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.015 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.015 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.015 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.015 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.015 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.015 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.015 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.015 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.015 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.015 "name": "raid_bdev1", 00:18:27.015 "uuid": "2497e8e3-a134-4b1a-affc-6c50a22337fe", 00:18:27.015 "strip_size_kb": 0, 00:18:27.015 "state": "online", 00:18:27.015 "raid_level": "raid1", 00:18:27.015 "superblock": true, 00:18:27.015 "num_base_bdevs": 2, 00:18:27.015 "num_base_bdevs_discovered": 1, 00:18:27.015 "num_base_bdevs_operational": 1, 00:18:27.015 "base_bdevs_list": [ 00:18:27.015 { 00:18:27.015 "name": null, 00:18:27.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.015 "is_configured": false, 00:18:27.015 "data_offset": 256, 00:18:27.015 "data_size": 7936 00:18:27.015 }, 00:18:27.015 { 00:18:27.015 "name": "pt2", 00:18:27.015 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:18:27.015 "is_configured": true, 00:18:27.015 "data_offset": 256, 00:18:27.015 "data_size": 7936 00:18:27.015 } 00:18:27.015 ] 00:18:27.015 }' 00:18:27.015 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.015 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.585 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:27.585 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:27.585 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.585 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.585 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.585 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:27.585 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:27.585 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:27.585 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.585 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.585 [2024-10-05 08:55:03.815302] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:27.585 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.585 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 
2497e8e3-a134-4b1a-affc-6c50a22337fe '!=' 2497e8e3-a134-4b1a-affc-6c50a22337fe ']' 00:18:27.585 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 84714 00:18:27.586 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 84714 ']' 00:18:27.586 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 84714 00:18:27.586 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:18:27.586 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:27.586 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84714 00:18:27.586 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:27.586 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:27.586 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84714' 00:18:27.586 killing process with pid 84714 00:18:27.586 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 84714 00:18:27.586 [2024-10-05 08:55:03.897945] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:27.586 [2024-10-05 08:55:03.898062] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:27.586 [2024-10-05 08:55:03.898122] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:27.586 [2024-10-05 08:55:03.898171] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:27.586 08:55:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # 
wait 84714 00:18:27.586 [2024-10-05 08:55:04.093561] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:29.231 08:55:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:18:29.231 00:18:29.231 real 0m6.115s 00:18:29.231 user 0m9.104s 00:18:29.231 sys 0m1.170s 00:18:29.231 08:55:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:29.231 08:55:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:29.231 ************************************ 00:18:29.231 END TEST raid_superblock_test_md_interleaved 00:18:29.231 ************************************ 00:18:29.231 08:55:05 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:18:29.231 08:55:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:18:29.231 08:55:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:29.231 08:55:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:29.231 ************************************ 00:18:29.231 START TEST raid_rebuild_test_sb_md_interleaved 00:18:29.231 ************************************ 00:18:29.231 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:18:29.231 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:29.231 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:29.231 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:29.231 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:29.231 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@573 -- # local verify=false 00:18:29.231 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:29.231 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:29.231 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:29.231 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:29.231 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:29.231 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:29.231 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:29.231 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:29.231 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:29.231 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:29.231 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:29.231 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:29.231 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:29.232 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:29.232 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:29.232 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:29.232 08:55:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:29.232 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:29.232 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:29.232 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=85006 00:18:29.232 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:29.232 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 85006 00:18:29.232 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 85006 ']' 00:18:29.232 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.232 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:29.232 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:29.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:29.232 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:29.232 08:55:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:29.232 [2024-10-05 08:55:05.477016] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:18:29.232 [2024-10-05 08:55:05.477255] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85006 ] 00:18:29.232 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:29.232 Zero copy mechanism will not be used. 00:18:29.232 [2024-10-05 08:55:05.646719] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.492 [2024-10-05 08:55:05.843075] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.752 [2024-10-05 08:55:06.035830] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:29.752 [2024-10-05 08:55:06.035969] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:30.013 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:30.013 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:18:30.013 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:30.013 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:18:30.013 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.013 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.013 BaseBdev1_malloc 00:18:30.013 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.013 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:30.013 08:55:06
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.013 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.013 [2024-10-05 08:55:06.347189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:30.013 [2024-10-05 08:55:06.347287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.013 [2024-10-05 08:55:06.347329] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:30.013 [2024-10-05 08:55:06.347359] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.013 [2024-10-05 08:55:06.349057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.013 [2024-10-05 08:55:06.349094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:30.013 BaseBdev1 00:18:30.013 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.013 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:30.013 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:18:30.013 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.013 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.013 BaseBdev2_malloc 00:18:30.013 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.013 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:30.013 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.013 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.013 [2024-10-05 08:55:06.411090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:30.013 [2024-10-05 08:55:06.411155] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.013 [2024-10-05 08:55:06.411174] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:30.013 [2024-10-05 08:55:06.411185] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.013 [2024-10-05 08:55:06.412895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.013 [2024-10-05 08:55:06.412931] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:30.013 BaseBdev2 00:18:30.013 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.013 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:18:30.013 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.013 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.013 spare_malloc 00:18:30.013 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.013 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:30.013 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.013 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 
00:18:30.013 spare_delay 00:18:30.013 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.013 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:30.013 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.013 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.013 [2024-10-05 08:55:06.474920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:30.013 [2024-10-05 08:55:06.474981] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.013 [2024-10-05 08:55:06.475003] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:30.013 [2024-10-05 08:55:06.475013] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.013 [2024-10-05 08:55:06.476693] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.013 [2024-10-05 08:55:06.476786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:30.013 spare 00:18:30.013 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.013 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:30.013 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.013 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.273 [2024-10-05 08:55:06.486978] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:30.273 [2024-10-05 08:55:06.488724] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:30.273 [2024-10-05 08:55:06.488922] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:30.273 [2024-10-05 08:55:06.488937] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:30.273 [2024-10-05 08:55:06.489027] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:30.273 [2024-10-05 08:55:06.489094] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:30.273 [2024-10-05 08:55:06.489102] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:30.273 [2024-10-05 08:55:06.489167] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:30.273 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.273 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:30.273 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:30.273 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:30.273 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:30.273 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:30.273 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:30.273 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.273 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:18:30.273 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.273 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.273 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.274 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.274 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.274 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.274 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.274 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.274 "name": "raid_bdev1", 00:18:30.274 "uuid": "f6bf0320-2569-4b18-aa64-186ec78d0823", 00:18:30.274 "strip_size_kb": 0, 00:18:30.274 "state": "online", 00:18:30.274 "raid_level": "raid1", 00:18:30.274 "superblock": true, 00:18:30.274 "num_base_bdevs": 2, 00:18:30.274 "num_base_bdevs_discovered": 2, 00:18:30.274 "num_base_bdevs_operational": 2, 00:18:30.274 "base_bdevs_list": [ 00:18:30.274 { 00:18:30.274 "name": "BaseBdev1", 00:18:30.274 "uuid": "fb0eeeb0-94d9-5192-b8a1-9069b17b19e3", 00:18:30.274 "is_configured": true, 00:18:30.274 "data_offset": 256, 00:18:30.274 "data_size": 7936 00:18:30.274 }, 00:18:30.274 { 00:18:30.274 "name": "BaseBdev2", 00:18:30.274 "uuid": "9066af2b-8ab2-5c03-912e-7b2a4be50355", 00:18:30.274 "is_configured": true, 00:18:30.274 "data_offset": 256, 00:18:30.274 "data_size": 7936 00:18:30.274 } 00:18:30.274 ] 00:18:30.274 }' 00:18:30.274 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.274 08:55:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.533 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:30.533 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:30.533 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.533 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.533 [2024-10-05 08:55:06.934370] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:30.533 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.533 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:30.533 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.533 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:30.533 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.533 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.533 08:55:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.793 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:30.793 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:30.793 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:18:30.793 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:30.793 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.793 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.793 [2024-10-05 08:55:07.018003] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:30.793 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.793 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:30.793 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:30.793 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:30.793 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:30.793 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:30.793 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:30.793 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.793 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.793 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.793 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.793 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.793 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.793 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.793 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.793 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.793 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.793 "name": "raid_bdev1", 00:18:30.793 "uuid": "f6bf0320-2569-4b18-aa64-186ec78d0823", 00:18:30.793 "strip_size_kb": 0, 00:18:30.793 "state": "online", 00:18:30.793 "raid_level": "raid1", 00:18:30.793 "superblock": true, 00:18:30.793 "num_base_bdevs": 2, 00:18:30.793 "num_base_bdevs_discovered": 1, 00:18:30.793 "num_base_bdevs_operational": 1, 00:18:30.793 "base_bdevs_list": [ 00:18:30.793 { 00:18:30.793 "name": null, 00:18:30.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.793 "is_configured": false, 00:18:30.793 "data_offset": 0, 00:18:30.793 "data_size": 7936 00:18:30.793 }, 00:18:30.793 { 00:18:30.793 "name": "BaseBdev2", 00:18:30.793 "uuid": "9066af2b-8ab2-5c03-912e-7b2a4be50355", 00:18:30.793 "is_configured": true, 00:18:30.793 "data_offset": 256, 00:18:30.793 "data_size": 7936 00:18:30.793 } 00:18:30.793 ] 00:18:30.793 }' 00:18:30.793 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.793 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.052 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:31.052 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.052 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # 
set +x 00:18:31.052 [2024-10-05 08:55:07.465349] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:31.052 [2024-10-05 08:55:07.479912] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:31.052 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.052 08:55:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:31.052 [2024-10-05 08:55:07.481711] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:32.432 08:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:32.432 08:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:32.432 08:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:32.432 08:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:32.432 08:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:32.432 08:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.432 08:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.432 08:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.432 08:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.432 08:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.432 08:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:18:32.432 "name": "raid_bdev1", 00:18:32.432 "uuid": "f6bf0320-2569-4b18-aa64-186ec78d0823", 00:18:32.432 "strip_size_kb": 0, 00:18:32.432 "state": "online", 00:18:32.432 "raid_level": "raid1", 00:18:32.432 "superblock": true, 00:18:32.432 "num_base_bdevs": 2, 00:18:32.432 "num_base_bdevs_discovered": 2, 00:18:32.432 "num_base_bdevs_operational": 2, 00:18:32.432 "process": { 00:18:32.432 "type": "rebuild", 00:18:32.432 "target": "spare", 00:18:32.432 "progress": { 00:18:32.432 "blocks": 2560, 00:18:32.432 "percent": 32 00:18:32.432 } 00:18:32.432 }, 00:18:32.432 "base_bdevs_list": [ 00:18:32.432 { 00:18:32.432 "name": "spare", 00:18:32.432 "uuid": "72b8a8d2-dedf-53f9-b4c1-00aaa56045d1", 00:18:32.432 "is_configured": true, 00:18:32.432 "data_offset": 256, 00:18:32.432 "data_size": 7936 00:18:32.432 }, 00:18:32.432 { 00:18:32.432 "name": "BaseBdev2", 00:18:32.432 "uuid": "9066af2b-8ab2-5c03-912e-7b2a4be50355", 00:18:32.432 "is_configured": true, 00:18:32.432 "data_offset": 256, 00:18:32.432 "data_size": 7936 00:18:32.432 } 00:18:32.432 ] 00:18:32.432 }' 00:18:32.432 08:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:32.432 08:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:32.432 08:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:32.432 08:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:32.432 08:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:32.432 08:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.432 08:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.432 [2024-10-05 
08:55:08.641485] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:32.432 [2024-10-05 08:55:08.686437] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:32.432 [2024-10-05 08:55:08.686492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:32.432 [2024-10-05 08:55:08.686506] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:32.432 [2024-10-05 08:55:08.686515] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:32.433 08:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.433 08:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:32.433 08:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:32.433 08:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:32.433 08:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:32.433 08:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:32.433 08:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:32.433 08:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.433 08:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.433 08:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.433 08:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.433 08:55:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.433 08:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.433 08:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.433 08:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.433 08:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.433 08:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.433 "name": "raid_bdev1", 00:18:32.433 "uuid": "f6bf0320-2569-4b18-aa64-186ec78d0823", 00:18:32.433 "strip_size_kb": 0, 00:18:32.433 "state": "online", 00:18:32.433 "raid_level": "raid1", 00:18:32.433 "superblock": true, 00:18:32.433 "num_base_bdevs": 2, 00:18:32.433 "num_base_bdevs_discovered": 1, 00:18:32.433 "num_base_bdevs_operational": 1, 00:18:32.433 "base_bdevs_list": [ 00:18:32.433 { 00:18:32.433 "name": null, 00:18:32.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.433 "is_configured": false, 00:18:32.433 "data_offset": 0, 00:18:32.433 "data_size": 7936 00:18:32.433 }, 00:18:32.433 { 00:18:32.433 "name": "BaseBdev2", 00:18:32.433 "uuid": "9066af2b-8ab2-5c03-912e-7b2a4be50355", 00:18:32.433 "is_configured": true, 00:18:32.433 "data_offset": 256, 00:18:32.433 "data_size": 7936 00:18:32.433 } 00:18:32.433 ] 00:18:32.433 }' 00:18:32.433 08:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.433 08:55:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.692 08:55:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:32.692 08:55:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:32.692 08:55:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:32.692 08:55:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:32.692 08:55:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:32.692 08:55:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.692 08:55:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.692 08:55:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.692 08:55:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.692 08:55:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.951 08:55:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:32.951 "name": "raid_bdev1", 00:18:32.951 "uuid": "f6bf0320-2569-4b18-aa64-186ec78d0823", 00:18:32.951 "strip_size_kb": 0, 00:18:32.951 "state": "online", 00:18:32.951 "raid_level": "raid1", 00:18:32.951 "superblock": true, 00:18:32.951 "num_base_bdevs": 2, 00:18:32.951 "num_base_bdevs_discovered": 1, 00:18:32.951 "num_base_bdevs_operational": 1, 00:18:32.951 "base_bdevs_list": [ 00:18:32.951 { 00:18:32.951 "name": null, 00:18:32.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.951 "is_configured": false, 00:18:32.951 "data_offset": 0, 00:18:32.951 "data_size": 7936 00:18:32.951 }, 00:18:32.951 { 00:18:32.951 "name": "BaseBdev2", 00:18:32.951 "uuid": "9066af2b-8ab2-5c03-912e-7b2a4be50355", 00:18:32.951 "is_configured": true, 00:18:32.951 "data_offset": 256, 
00:18:32.951 "data_size": 7936 00:18:32.951 } 00:18:32.951 ] 00:18:32.951 }' 00:18:32.951 08:55:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:32.951 08:55:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:32.951 08:55:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:32.951 08:55:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:32.952 08:55:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:32.952 08:55:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.952 08:55:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.952 [2024-10-05 08:55:09.264947] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:32.952 [2024-10-05 08:55:09.279255] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:32.952 08:55:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.952 08:55:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:32.952 [2024-10-05 08:55:09.280977] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:33.892 08:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:33.892 08:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:33.892 08:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:33.892 08:55:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:33.892 08:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:33.892 08:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.892 08:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.892 08:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.892 08:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.892 08:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.892 08:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:33.892 "name": "raid_bdev1", 00:18:33.892 "uuid": "f6bf0320-2569-4b18-aa64-186ec78d0823", 00:18:33.892 "strip_size_kb": 0, 00:18:33.892 "state": "online", 00:18:33.892 "raid_level": "raid1", 00:18:33.892 "superblock": true, 00:18:33.892 "num_base_bdevs": 2, 00:18:33.892 "num_base_bdevs_discovered": 2, 00:18:33.892 "num_base_bdevs_operational": 2, 00:18:33.892 "process": { 00:18:33.892 "type": "rebuild", 00:18:33.892 "target": "spare", 00:18:33.892 "progress": { 00:18:33.892 "blocks": 2560, 00:18:33.892 "percent": 32 00:18:33.892 } 00:18:33.892 }, 00:18:33.892 "base_bdevs_list": [ 00:18:33.892 { 00:18:33.892 "name": "spare", 00:18:33.892 "uuid": "72b8a8d2-dedf-53f9-b4c1-00aaa56045d1", 00:18:33.892 "is_configured": true, 00:18:33.892 "data_offset": 256, 00:18:33.892 "data_size": 7936 00:18:33.892 }, 00:18:33.892 { 00:18:33.892 "name": "BaseBdev2", 00:18:33.892 "uuid": "9066af2b-8ab2-5c03-912e-7b2a4be50355", 00:18:33.892 "is_configured": true, 00:18:33.892 "data_offset": 256, 00:18:33.892 "data_size": 7936 00:18:33.892 } 
00:18:33.892 ] 00:18:33.892 }' 00:18:33.892 08:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:34.152 08:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:34.152 08:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:34.152 08:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:34.152 08:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:34.152 08:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:34.152 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:34.152 08:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:34.152 08:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:34.152 08:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:34.152 08:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=742 00:18:34.152 08:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:34.152 08:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:34.152 08:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:34.152 08:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:34.152 08:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:18:34.152 08:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:34.152 08:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.152 08:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.152 08:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.152 08:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.152 08:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.152 08:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:34.152 "name": "raid_bdev1", 00:18:34.152 "uuid": "f6bf0320-2569-4b18-aa64-186ec78d0823", 00:18:34.152 "strip_size_kb": 0, 00:18:34.152 "state": "online", 00:18:34.152 "raid_level": "raid1", 00:18:34.152 "superblock": true, 00:18:34.152 "num_base_bdevs": 2, 00:18:34.152 "num_base_bdevs_discovered": 2, 00:18:34.152 "num_base_bdevs_operational": 2, 00:18:34.152 "process": { 00:18:34.152 "type": "rebuild", 00:18:34.152 "target": "spare", 00:18:34.152 "progress": { 00:18:34.152 "blocks": 2816, 00:18:34.152 "percent": 35 00:18:34.152 } 00:18:34.152 }, 00:18:34.152 "base_bdevs_list": [ 00:18:34.152 { 00:18:34.152 "name": "spare", 00:18:34.152 "uuid": "72b8a8d2-dedf-53f9-b4c1-00aaa56045d1", 00:18:34.152 "is_configured": true, 00:18:34.152 "data_offset": 256, 00:18:34.152 "data_size": 7936 00:18:34.152 }, 00:18:34.152 { 00:18:34.152 "name": "BaseBdev2", 00:18:34.152 "uuid": "9066af2b-8ab2-5c03-912e-7b2a4be50355", 00:18:34.152 "is_configured": true, 00:18:34.152 "data_offset": 256, 00:18:34.152 "data_size": 7936 00:18:34.152 } 00:18:34.152 ] 00:18:34.152 }' 00:18:34.152 08:55:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:34.152 08:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:34.152 08:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:34.152 08:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:34.152 08:55:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:35.534 08:55:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:35.534 08:55:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:35.534 08:55:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:35.534 08:55:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:35.534 08:55:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:35.534 08:55:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:35.534 08:55:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.534 08:55:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.534 08:55:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.534 08:55:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.534 08:55:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.534 
08:55:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:35.534 "name": "raid_bdev1", 00:18:35.534 "uuid": "f6bf0320-2569-4b18-aa64-186ec78d0823", 00:18:35.534 "strip_size_kb": 0, 00:18:35.534 "state": "online", 00:18:35.534 "raid_level": "raid1", 00:18:35.534 "superblock": true, 00:18:35.534 "num_base_bdevs": 2, 00:18:35.534 "num_base_bdevs_discovered": 2, 00:18:35.534 "num_base_bdevs_operational": 2, 00:18:35.534 "process": { 00:18:35.534 "type": "rebuild", 00:18:35.534 "target": "spare", 00:18:35.534 "progress": { 00:18:35.534 "blocks": 5632, 00:18:35.534 "percent": 70 00:18:35.534 } 00:18:35.534 }, 00:18:35.534 "base_bdevs_list": [ 00:18:35.534 { 00:18:35.534 "name": "spare", 00:18:35.534 "uuid": "72b8a8d2-dedf-53f9-b4c1-00aaa56045d1", 00:18:35.534 "is_configured": true, 00:18:35.534 "data_offset": 256, 00:18:35.534 "data_size": 7936 00:18:35.534 }, 00:18:35.534 { 00:18:35.534 "name": "BaseBdev2", 00:18:35.534 "uuid": "9066af2b-8ab2-5c03-912e-7b2a4be50355", 00:18:35.534 "is_configured": true, 00:18:35.534 "data_offset": 256, 00:18:35.534 "data_size": 7936 00:18:35.534 } 00:18:35.534 ] 00:18:35.534 }' 00:18:35.534 08:55:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:35.534 08:55:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:35.534 08:55:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:35.534 08:55:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:35.534 08:55:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:36.103 [2024-10-05 08:55:12.392883] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:36.103 [2024-10-05 08:55:12.392945] 
bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:36.103 [2024-10-05 08:55:12.393048] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:36.364 08:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:36.364 08:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:36.364 08:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:36.364 08:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:36.364 08:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:36.364 08:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:36.364 08:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.364 08:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.364 08:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.364 08:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.364 08:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.364 08:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:36.364 "name": "raid_bdev1", 00:18:36.364 "uuid": "f6bf0320-2569-4b18-aa64-186ec78d0823", 00:18:36.364 "strip_size_kb": 0, 00:18:36.364 "state": "online", 00:18:36.364 "raid_level": "raid1", 00:18:36.364 "superblock": true, 00:18:36.364 "num_base_bdevs": 2, 00:18:36.364 
"num_base_bdevs_discovered": 2, 00:18:36.364 "num_base_bdevs_operational": 2, 00:18:36.364 "base_bdevs_list": [ 00:18:36.364 { 00:18:36.364 "name": "spare", 00:18:36.364 "uuid": "72b8a8d2-dedf-53f9-b4c1-00aaa56045d1", 00:18:36.364 "is_configured": true, 00:18:36.364 "data_offset": 256, 00:18:36.364 "data_size": 7936 00:18:36.364 }, 00:18:36.364 { 00:18:36.364 "name": "BaseBdev2", 00:18:36.364 "uuid": "9066af2b-8ab2-5c03-912e-7b2a4be50355", 00:18:36.364 "is_configured": true, 00:18:36.364 "data_offset": 256, 00:18:36.364 "data_size": 7936 00:18:36.364 } 00:18:36.364 ] 00:18:36.364 }' 00:18:36.364 08:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:36.624 08:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:36.624 08:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:36.624 08:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:36.624 08:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:18:36.624 08:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:36.624 08:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:36.624 08:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:36.624 08:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:36.624 08:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:36.624 08:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.624 
08:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.624 08:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.624 08:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.624 08:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.624 08:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:36.624 "name": "raid_bdev1", 00:18:36.624 "uuid": "f6bf0320-2569-4b18-aa64-186ec78d0823", 00:18:36.624 "strip_size_kb": 0, 00:18:36.624 "state": "online", 00:18:36.624 "raid_level": "raid1", 00:18:36.624 "superblock": true, 00:18:36.624 "num_base_bdevs": 2, 00:18:36.624 "num_base_bdevs_discovered": 2, 00:18:36.624 "num_base_bdevs_operational": 2, 00:18:36.624 "base_bdevs_list": [ 00:18:36.624 { 00:18:36.624 "name": "spare", 00:18:36.624 "uuid": "72b8a8d2-dedf-53f9-b4c1-00aaa56045d1", 00:18:36.624 "is_configured": true, 00:18:36.624 "data_offset": 256, 00:18:36.624 "data_size": 7936 00:18:36.624 }, 00:18:36.624 { 00:18:36.624 "name": "BaseBdev2", 00:18:36.624 "uuid": "9066af2b-8ab2-5c03-912e-7b2a4be50355", 00:18:36.624 "is_configured": true, 00:18:36.624 "data_offset": 256, 00:18:36.624 "data_size": 7936 00:18:36.624 } 00:18:36.624 ] 00:18:36.624 }' 00:18:36.624 08:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:36.624 08:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:36.624 08:55:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:36.624 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:36.624 08:55:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:36.624 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:36.624 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:36.624 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:36.625 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:36.625 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:36.625 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.625 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.625 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.625 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.625 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.625 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.625 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.625 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.625 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.625 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:36.625 "name": 
"raid_bdev1", 00:18:36.625 "uuid": "f6bf0320-2569-4b18-aa64-186ec78d0823", 00:18:36.625 "strip_size_kb": 0, 00:18:36.625 "state": "online", 00:18:36.625 "raid_level": "raid1", 00:18:36.625 "superblock": true, 00:18:36.625 "num_base_bdevs": 2, 00:18:36.625 "num_base_bdevs_discovered": 2, 00:18:36.625 "num_base_bdevs_operational": 2, 00:18:36.625 "base_bdevs_list": [ 00:18:36.625 { 00:18:36.625 "name": "spare", 00:18:36.625 "uuid": "72b8a8d2-dedf-53f9-b4c1-00aaa56045d1", 00:18:36.625 "is_configured": true, 00:18:36.625 "data_offset": 256, 00:18:36.625 "data_size": 7936 00:18:36.625 }, 00:18:36.625 { 00:18:36.625 "name": "BaseBdev2", 00:18:36.625 "uuid": "9066af2b-8ab2-5c03-912e-7b2a4be50355", 00:18:36.625 "is_configured": true, 00:18:36.625 "data_offset": 256, 00:18:36.625 "data_size": 7936 00:18:36.625 } 00:18:36.625 ] 00:18:36.625 }' 00:18:36.625 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:36.625 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.194 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:37.194 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.194 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.194 [2024-10-05 08:55:13.453739] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:37.194 [2024-10-05 08:55:13.453814] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:37.194 [2024-10-05 08:55:13.453887] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:37.194 [2024-10-05 08:55:13.453944] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:37.194 [2024-10-05 
08:55:13.453952] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:37.194 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.194 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.194 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:18:37.194 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.194 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.194 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.194 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:37.194 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:18:37.194 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:37.194 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:37.194 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.194 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.194 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.194 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:37.194 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.194 08:55:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.194 [2024-10-05 08:55:13.529599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:37.194 [2024-10-05 08:55:13.529647] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:37.194 [2024-10-05 08:55:13.529668] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:37.194 [2024-10-05 08:55:13.529677] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:37.194 [2024-10-05 08:55:13.531534] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:37.194 [2024-10-05 08:55:13.531616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:37.194 [2024-10-05 08:55:13.531669] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:37.194 [2024-10-05 08:55:13.531712] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:37.194 [2024-10-05 08:55:13.531808] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:37.194 spare 00:18:37.194 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.194 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:37.194 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.194 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.194 [2024-10-05 08:55:13.631691] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:37.194 [2024-10-05 08:55:13.631719] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:37.194 [2024-10-05 08:55:13.631803] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:37.194 [2024-10-05 08:55:13.631879] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:37.194 [2024-10-05 08:55:13.631886] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:37.194 [2024-10-05 08:55:13.631970] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:37.194 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.194 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:37.194 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:37.194 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:37.195 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:37.195 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:37.195 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:37.195 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.195 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.195 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.195 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.195 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.195 08:55:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.195 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.195 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.195 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.455 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.455 "name": "raid_bdev1", 00:18:37.455 "uuid": "f6bf0320-2569-4b18-aa64-186ec78d0823", 00:18:37.455 "strip_size_kb": 0, 00:18:37.455 "state": "online", 00:18:37.455 "raid_level": "raid1", 00:18:37.455 "superblock": true, 00:18:37.455 "num_base_bdevs": 2, 00:18:37.455 "num_base_bdevs_discovered": 2, 00:18:37.455 "num_base_bdevs_operational": 2, 00:18:37.455 "base_bdevs_list": [ 00:18:37.455 { 00:18:37.455 "name": "spare", 00:18:37.455 "uuid": "72b8a8d2-dedf-53f9-b4c1-00aaa56045d1", 00:18:37.455 "is_configured": true, 00:18:37.455 "data_offset": 256, 00:18:37.455 "data_size": 7936 00:18:37.455 }, 00:18:37.455 { 00:18:37.455 "name": "BaseBdev2", 00:18:37.455 "uuid": "9066af2b-8ab2-5c03-912e-7b2a4be50355", 00:18:37.455 "is_configured": true, 00:18:37.455 "data_offset": 256, 00:18:37.455 "data_size": 7936 00:18:37.455 } 00:18:37.455 ] 00:18:37.455 }' 00:18:37.455 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.455 08:55:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.715 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:37.715 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:37.715 08:55:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:37.715 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:37.715 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:37.715 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.715 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.715 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.715 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.715 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.715 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:37.715 "name": "raid_bdev1", 00:18:37.715 "uuid": "f6bf0320-2569-4b18-aa64-186ec78d0823", 00:18:37.715 "strip_size_kb": 0, 00:18:37.715 "state": "online", 00:18:37.715 "raid_level": "raid1", 00:18:37.715 "superblock": true, 00:18:37.715 "num_base_bdevs": 2, 00:18:37.715 "num_base_bdevs_discovered": 2, 00:18:37.715 "num_base_bdevs_operational": 2, 00:18:37.715 "base_bdevs_list": [ 00:18:37.715 { 00:18:37.715 "name": "spare", 00:18:37.715 "uuid": "72b8a8d2-dedf-53f9-b4c1-00aaa56045d1", 00:18:37.715 "is_configured": true, 00:18:37.715 "data_offset": 256, 00:18:37.715 "data_size": 7936 00:18:37.715 }, 00:18:37.715 { 00:18:37.715 "name": "BaseBdev2", 00:18:37.715 "uuid": "9066af2b-8ab2-5c03-912e-7b2a4be50355", 00:18:37.715 "is_configured": true, 00:18:37.715 "data_offset": 256, 00:18:37.715 "data_size": 7936 00:18:37.715 } 00:18:37.715 ] 00:18:37.715 }' 00:18:37.715 08:55:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:37.715 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:37.715 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:37.975 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:37.975 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.975 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.975 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.975 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:37.975 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.975 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:37.975 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:37.975 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.975 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.975 [2024-10-05 08:55:14.260466] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:37.975 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.975 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:37.975 08:55:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:37.975 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:37.975 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:37.975 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:37.975 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:37.975 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.975 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.975 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.975 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.975 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.975 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.975 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.975 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.975 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.975 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.975 "name": "raid_bdev1", 00:18:37.975 "uuid": "f6bf0320-2569-4b18-aa64-186ec78d0823", 00:18:37.975 "strip_size_kb": 0, 00:18:37.975 "state": "online", 00:18:37.975 
"raid_level": "raid1", 00:18:37.975 "superblock": true, 00:18:37.975 "num_base_bdevs": 2, 00:18:37.975 "num_base_bdevs_discovered": 1, 00:18:37.975 "num_base_bdevs_operational": 1, 00:18:37.975 "base_bdevs_list": [ 00:18:37.975 { 00:18:37.975 "name": null, 00:18:37.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.975 "is_configured": false, 00:18:37.975 "data_offset": 0, 00:18:37.975 "data_size": 7936 00:18:37.975 }, 00:18:37.975 { 00:18:37.975 "name": "BaseBdev2", 00:18:37.975 "uuid": "9066af2b-8ab2-5c03-912e-7b2a4be50355", 00:18:37.975 "is_configured": true, 00:18:37.975 "data_offset": 256, 00:18:37.975 "data_size": 7936 00:18:37.975 } 00:18:37.975 ] 00:18:37.975 }' 00:18:37.975 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.975 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.545 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:38.545 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.545 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.545 [2024-10-05 08:55:14.739667] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:38.545 [2024-10-05 08:55:14.739802] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:38.545 [2024-10-05 08:55:14.739818] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:38.545 [2024-10-05 08:55:14.739849] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:38.545 [2024-10-05 08:55:14.754762] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:38.545 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.545 08:55:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:38.545 [2024-10-05 08:55:14.756507] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:39.485 08:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:39.485 08:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:39.485 08:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:39.485 08:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:39.485 08:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:39.485 08:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.485 08:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.485 08:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.485 08:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.485 08:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.485 08:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:18:39.485 "name": "raid_bdev1", 00:18:39.485 "uuid": "f6bf0320-2569-4b18-aa64-186ec78d0823", 00:18:39.485 "strip_size_kb": 0, 00:18:39.485 "state": "online", 00:18:39.485 "raid_level": "raid1", 00:18:39.485 "superblock": true, 00:18:39.485 "num_base_bdevs": 2, 00:18:39.485 "num_base_bdevs_discovered": 2, 00:18:39.485 "num_base_bdevs_operational": 2, 00:18:39.485 "process": { 00:18:39.485 "type": "rebuild", 00:18:39.485 "target": "spare", 00:18:39.485 "progress": { 00:18:39.485 "blocks": 2560, 00:18:39.485 "percent": 32 00:18:39.485 } 00:18:39.485 }, 00:18:39.485 "base_bdevs_list": [ 00:18:39.485 { 00:18:39.485 "name": "spare", 00:18:39.485 "uuid": "72b8a8d2-dedf-53f9-b4c1-00aaa56045d1", 00:18:39.485 "is_configured": true, 00:18:39.485 "data_offset": 256, 00:18:39.485 "data_size": 7936 00:18:39.485 }, 00:18:39.485 { 00:18:39.485 "name": "BaseBdev2", 00:18:39.485 "uuid": "9066af2b-8ab2-5c03-912e-7b2a4be50355", 00:18:39.485 "is_configured": true, 00:18:39.485 "data_offset": 256, 00:18:39.485 "data_size": 7936 00:18:39.485 } 00:18:39.485 ] 00:18:39.485 }' 00:18:39.485 08:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:39.485 08:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:39.485 08:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:39.486 08:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:39.486 08:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:39.486 08:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.486 08:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.486 [2024-10-05 08:55:15.897428] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:39.750 [2024-10-05 08:55:15.961306] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:39.750 [2024-10-05 08:55:15.961390] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:39.750 [2024-10-05 08:55:15.961404] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:39.750 [2024-10-05 08:55:15.961413] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:39.750 08:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.750 08:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:39.750 08:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:39.750 08:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:39.750 08:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:39.750 08:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:39.750 08:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:39.750 08:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.750 08:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.750 08:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.750 08:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.750 08:55:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.750 08:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.750 08:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.750 08:55:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.751 08:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.751 08:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.751 "name": "raid_bdev1", 00:18:39.751 "uuid": "f6bf0320-2569-4b18-aa64-186ec78d0823", 00:18:39.751 "strip_size_kb": 0, 00:18:39.751 "state": "online", 00:18:39.751 "raid_level": "raid1", 00:18:39.751 "superblock": true, 00:18:39.751 "num_base_bdevs": 2, 00:18:39.751 "num_base_bdevs_discovered": 1, 00:18:39.751 "num_base_bdevs_operational": 1, 00:18:39.751 "base_bdevs_list": [ 00:18:39.751 { 00:18:39.751 "name": null, 00:18:39.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.751 "is_configured": false, 00:18:39.751 "data_offset": 0, 00:18:39.751 "data_size": 7936 00:18:39.751 }, 00:18:39.751 { 00:18:39.751 "name": "BaseBdev2", 00:18:39.751 "uuid": "9066af2b-8ab2-5c03-912e-7b2a4be50355", 00:18:39.751 "is_configured": true, 00:18:39.751 "data_offset": 256, 00:18:39.751 "data_size": 7936 00:18:39.751 } 00:18:39.751 ] 00:18:39.751 }' 00:18:39.751 08:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.751 08:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.014 08:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:40.014 08:55:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.014 08:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.014 [2024-10-05 08:55:16.411035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:40.014 [2024-10-05 08:55:16.411087] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.014 [2024-10-05 08:55:16.411110] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:40.014 [2024-10-05 08:55:16.411122] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.014 [2024-10-05 08:55:16.411295] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.014 [2024-10-05 08:55:16.411311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:40.014 [2024-10-05 08:55:16.411359] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:40.014 [2024-10-05 08:55:16.411371] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:40.014 [2024-10-05 08:55:16.411384] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:40.014 [2024-10-05 08:55:16.411404] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:40.014 [2024-10-05 08:55:16.425897] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:40.014 spare 00:18:40.014 08:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.014 08:55:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:40.014 [2024-10-05 08:55:16.427632] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:41.439 08:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:41.439 08:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:41.439 08:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:41.439 08:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:41.439 08:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:41.439 08:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.439 08:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.439 08:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.439 08:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.439 08:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.439 08:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:18:41.439 "name": "raid_bdev1", 00:18:41.439 "uuid": "f6bf0320-2569-4b18-aa64-186ec78d0823", 00:18:41.439 "strip_size_kb": 0, 00:18:41.439 "state": "online", 00:18:41.439 "raid_level": "raid1", 00:18:41.439 "superblock": true, 00:18:41.439 "num_base_bdevs": 2, 00:18:41.439 "num_base_bdevs_discovered": 2, 00:18:41.439 "num_base_bdevs_operational": 2, 00:18:41.439 "process": { 00:18:41.439 "type": "rebuild", 00:18:41.439 "target": "spare", 00:18:41.439 "progress": { 00:18:41.439 "blocks": 2560, 00:18:41.439 "percent": 32 00:18:41.439 } 00:18:41.439 }, 00:18:41.439 "base_bdevs_list": [ 00:18:41.439 { 00:18:41.439 "name": "spare", 00:18:41.439 "uuid": "72b8a8d2-dedf-53f9-b4c1-00aaa56045d1", 00:18:41.439 "is_configured": true, 00:18:41.439 "data_offset": 256, 00:18:41.439 "data_size": 7936 00:18:41.439 }, 00:18:41.439 { 00:18:41.439 "name": "BaseBdev2", 00:18:41.439 "uuid": "9066af2b-8ab2-5c03-912e-7b2a4be50355", 00:18:41.439 "is_configured": true, 00:18:41.439 "data_offset": 256, 00:18:41.439 "data_size": 7936 00:18:41.439 } 00:18:41.439 ] 00:18:41.439 }' 00:18:41.439 08:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:41.439 08:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:41.439 08:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:41.439 08:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:41.439 08:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:41.439 08:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.439 08:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.439 [2024-10-05 
08:55:17.563343] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:41.439 [2024-10-05 08:55:17.632243] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:41.439 [2024-10-05 08:55:17.632292] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:41.439 [2024-10-05 08:55:17.632307] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:41.439 [2024-10-05 08:55:17.632314] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:41.439 08:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.439 08:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:41.439 08:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:41.439 08:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:41.439 08:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:41.439 08:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:41.439 08:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:41.439 08:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.439 08:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.439 08:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.439 08:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.439 08:55:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.439 08:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.439 08:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.439 08:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.439 08:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.439 08:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.439 "name": "raid_bdev1", 00:18:41.439 "uuid": "f6bf0320-2569-4b18-aa64-186ec78d0823", 00:18:41.439 "strip_size_kb": 0, 00:18:41.439 "state": "online", 00:18:41.439 "raid_level": "raid1", 00:18:41.439 "superblock": true, 00:18:41.440 "num_base_bdevs": 2, 00:18:41.440 "num_base_bdevs_discovered": 1, 00:18:41.440 "num_base_bdevs_operational": 1, 00:18:41.440 "base_bdevs_list": [ 00:18:41.440 { 00:18:41.440 "name": null, 00:18:41.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.440 "is_configured": false, 00:18:41.440 "data_offset": 0, 00:18:41.440 "data_size": 7936 00:18:41.440 }, 00:18:41.440 { 00:18:41.440 "name": "BaseBdev2", 00:18:41.440 "uuid": "9066af2b-8ab2-5c03-912e-7b2a4be50355", 00:18:41.440 "is_configured": true, 00:18:41.440 "data_offset": 256, 00:18:41.440 "data_size": 7936 00:18:41.440 } 00:18:41.440 ] 00:18:41.440 }' 00:18:41.440 08:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.440 08:55:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.713 08:55:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:41.713 08:55:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:41.713 08:55:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:41.713 08:55:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:41.713 08:55:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:41.713 08:55:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.713 08:55:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.713 08:55:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.713 08:55:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.713 08:55:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.713 08:55:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:41.713 "name": "raid_bdev1", 00:18:41.713 "uuid": "f6bf0320-2569-4b18-aa64-186ec78d0823", 00:18:41.713 "strip_size_kb": 0, 00:18:41.713 "state": "online", 00:18:41.713 "raid_level": "raid1", 00:18:41.713 "superblock": true, 00:18:41.713 "num_base_bdevs": 2, 00:18:41.713 "num_base_bdevs_discovered": 1, 00:18:41.713 "num_base_bdevs_operational": 1, 00:18:41.713 "base_bdevs_list": [ 00:18:41.713 { 00:18:41.713 "name": null, 00:18:41.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.713 "is_configured": false, 00:18:41.713 "data_offset": 0, 00:18:41.713 "data_size": 7936 00:18:41.713 }, 00:18:41.713 { 00:18:41.713 "name": "BaseBdev2", 00:18:41.713 "uuid": "9066af2b-8ab2-5c03-912e-7b2a4be50355", 00:18:41.713 "is_configured": true, 00:18:41.713 "data_offset": 256, 
00:18:41.713 "data_size": 7936 00:18:41.713 } 00:18:41.713 ] 00:18:41.713 }' 00:18:41.713 08:55:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:41.973 08:55:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:41.973 08:55:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:41.973 08:55:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:41.973 08:55:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:41.973 08:55:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.973 08:55:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.973 08:55:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.973 08:55:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:41.973 08:55:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.973 08:55:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.973 [2024-10-05 08:55:18.258192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:41.973 [2024-10-05 08:55:18.258247] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:41.973 [2024-10-05 08:55:18.258272] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:41.973 [2024-10-05 08:55:18.258281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:41.973 [2024-10-05 08:55:18.258440] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:41.973 [2024-10-05 08:55:18.258451] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:41.973 [2024-10-05 08:55:18.258508] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:41.973 [2024-10-05 08:55:18.258519] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:41.973 [2024-10-05 08:55:18.258529] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:41.973 [2024-10-05 08:55:18.258539] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:41.973 BaseBdev1 00:18:41.973 08:55:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.973 08:55:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:42.912 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:42.912 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:42.912 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:42.912 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:42.912 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:42.912 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:42.912 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.912 08:55:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.912 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.912 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.912 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.912 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.912 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.912 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.912 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.912 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.912 "name": "raid_bdev1", 00:18:42.912 "uuid": "f6bf0320-2569-4b18-aa64-186ec78d0823", 00:18:42.912 "strip_size_kb": 0, 00:18:42.912 "state": "online", 00:18:42.912 "raid_level": "raid1", 00:18:42.912 "superblock": true, 00:18:42.912 "num_base_bdevs": 2, 00:18:42.912 "num_base_bdevs_discovered": 1, 00:18:42.912 "num_base_bdevs_operational": 1, 00:18:42.912 "base_bdevs_list": [ 00:18:42.912 { 00:18:42.912 "name": null, 00:18:42.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.912 "is_configured": false, 00:18:42.912 "data_offset": 0, 00:18:42.912 "data_size": 7936 00:18:42.912 }, 00:18:42.912 { 00:18:42.912 "name": "BaseBdev2", 00:18:42.912 "uuid": "9066af2b-8ab2-5c03-912e-7b2a4be50355", 00:18:42.912 "is_configured": true, 00:18:42.912 "data_offset": 256, 00:18:42.912 "data_size": 7936 00:18:42.912 } 00:18:42.912 ] 00:18:42.912 }' 00:18:42.912 08:55:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.912 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.481 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:43.482 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.482 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:43.482 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:43.482 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.482 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.482 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.482 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.482 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.482 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.482 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:43.482 "name": "raid_bdev1", 00:18:43.482 "uuid": "f6bf0320-2569-4b18-aa64-186ec78d0823", 00:18:43.482 "strip_size_kb": 0, 00:18:43.482 "state": "online", 00:18:43.482 "raid_level": "raid1", 00:18:43.482 "superblock": true, 00:18:43.482 "num_base_bdevs": 2, 00:18:43.482 "num_base_bdevs_discovered": 1, 00:18:43.482 "num_base_bdevs_operational": 1, 00:18:43.482 "base_bdevs_list": [ 00:18:43.482 { 00:18:43.482 "name": 
null, 00:18:43.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.482 "is_configured": false, 00:18:43.482 "data_offset": 0, 00:18:43.482 "data_size": 7936 00:18:43.482 }, 00:18:43.482 { 00:18:43.482 "name": "BaseBdev2", 00:18:43.482 "uuid": "9066af2b-8ab2-5c03-912e-7b2a4be50355", 00:18:43.482 "is_configured": true, 00:18:43.482 "data_offset": 256, 00:18:43.482 "data_size": 7936 00:18:43.482 } 00:18:43.482 ] 00:18:43.482 }' 00:18:43.482 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:43.482 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:43.482 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.482 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:43.482 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:43.482 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:18:43.482 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:43.482 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:43.482 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:43.482 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:43.482 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:43.482 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:43.482 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.482 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.482 [2024-10-05 08:55:19.899385] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:43.482 [2024-10-05 08:55:19.899520] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:43.482 [2024-10-05 08:55:19.899536] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:43.482 request: 00:18:43.482 { 00:18:43.482 "base_bdev": "BaseBdev1", 00:18:43.482 "raid_bdev": "raid_bdev1", 00:18:43.482 "method": "bdev_raid_add_base_bdev", 00:18:43.482 "req_id": 1 00:18:43.482 } 00:18:43.482 Got JSON-RPC error response 00:18:43.482 response: 00:18:43.482 { 00:18:43.482 "code": -22, 00:18:43.482 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:43.482 } 00:18:43.482 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:43.482 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:18:43.482 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:43.482 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:43.482 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:43.482 08:55:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:44.863 08:55:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:18:44.863 08:55:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:44.863 08:55:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:44.863 08:55:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:44.863 08:55:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:44.863 08:55:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:44.863 08:55:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.863 08:55:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.863 08:55:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.863 08:55:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.863 08:55:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.863 08:55:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.863 08:55:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.863 08:55:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.863 08:55:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.863 08:55:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.863 "name": "raid_bdev1", 00:18:44.863 "uuid": "f6bf0320-2569-4b18-aa64-186ec78d0823", 00:18:44.863 "strip_size_kb": 0, 
00:18:44.863 "state": "online", 00:18:44.863 "raid_level": "raid1", 00:18:44.863 "superblock": true, 00:18:44.863 "num_base_bdevs": 2, 00:18:44.863 "num_base_bdevs_discovered": 1, 00:18:44.863 "num_base_bdevs_operational": 1, 00:18:44.863 "base_bdevs_list": [ 00:18:44.863 { 00:18:44.863 "name": null, 00:18:44.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.863 "is_configured": false, 00:18:44.863 "data_offset": 0, 00:18:44.863 "data_size": 7936 00:18:44.863 }, 00:18:44.863 { 00:18:44.863 "name": "BaseBdev2", 00:18:44.863 "uuid": "9066af2b-8ab2-5c03-912e-7b2a4be50355", 00:18:44.863 "is_configured": true, 00:18:44.863 "data_offset": 256, 00:18:44.863 "data_size": 7936 00:18:44.863 } 00:18:44.863 ] 00:18:44.863 }' 00:18:44.863 08:55:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.863 08:55:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.123 08:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:45.123 08:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:45.123 08:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:45.123 08:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:45.123 08:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:45.123 08:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.123 08:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.123 08:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.123 
08:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.123 08:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.123 08:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:45.123 "name": "raid_bdev1", 00:18:45.123 "uuid": "f6bf0320-2569-4b18-aa64-186ec78d0823", 00:18:45.123 "strip_size_kb": 0, 00:18:45.123 "state": "online", 00:18:45.123 "raid_level": "raid1", 00:18:45.123 "superblock": true, 00:18:45.123 "num_base_bdevs": 2, 00:18:45.123 "num_base_bdevs_discovered": 1, 00:18:45.123 "num_base_bdevs_operational": 1, 00:18:45.123 "base_bdevs_list": [ 00:18:45.123 { 00:18:45.123 "name": null, 00:18:45.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.123 "is_configured": false, 00:18:45.124 "data_offset": 0, 00:18:45.124 "data_size": 7936 00:18:45.124 }, 00:18:45.124 { 00:18:45.124 "name": "BaseBdev2", 00:18:45.124 "uuid": "9066af2b-8ab2-5c03-912e-7b2a4be50355", 00:18:45.124 "is_configured": true, 00:18:45.124 "data_offset": 256, 00:18:45.124 "data_size": 7936 00:18:45.124 } 00:18:45.124 ] 00:18:45.124 }' 00:18:45.124 08:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:45.124 08:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:45.124 08:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:45.124 08:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:45.124 08:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 85006 00:18:45.124 08:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 85006 ']' 00:18:45.124 08:55:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 85006 00:18:45.124 08:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:18:45.124 08:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:45.124 08:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85006 00:18:45.124 08:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:45.124 08:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:45.124 killing process with pid 85006 00:18:45.124 Received shutdown signal, test time was about 60.000000 seconds 00:18:45.124 00:18:45.124 Latency(us) 00:18:45.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.124 =================================================================================================================== 00:18:45.124 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:45.124 08:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85006' 00:18:45.124 08:55:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 85006 00:18:45.124 [2024-10-05 08:55:21.541205] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:45.124 [2024-10-05 08:55:21.541344] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:45.124 [2024-10-05 08:55:21.541384] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:45.124 [2024-10-05 08:55:21.541395] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:45.124 08:55:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 85006 00:18:45.384 [2024-10-05 08:55:21.822603] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:46.766 08:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:18:46.766 00:18:46.766 real 0m17.613s 00:18:46.766 user 0m23.083s 00:18:46.766 sys 0m1.723s 00:18:46.766 08:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:46.766 08:55:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.766 ************************************ 00:18:46.766 END TEST raid_rebuild_test_sb_md_interleaved 00:18:46.766 ************************************ 00:18:46.766 08:55:23 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:18:46.766 08:55:23 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:18:46.766 08:55:23 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 85006 ']' 00:18:46.766 08:55:23 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 85006 00:18:46.766 08:55:23 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:18:46.766 00:18:46.766 real 12m4.692s 00:18:46.766 user 16m4.325s 00:18:46.766 sys 2m6.867s 00:18:46.766 08:55:23 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:46.766 08:55:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:46.766 ************************************ 00:18:46.766 END TEST bdev_raid 00:18:46.766 ************************************ 00:18:46.766 08:55:23 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:46.766 08:55:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:46.766 08:55:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:46.766 08:55:23 -- common/autotest_common.sh@10 -- # set +x 00:18:46.766 ************************************ 00:18:46.766 START TEST spdkcli_raid 00:18:46.766 
************************************ 00:18:46.766 08:55:23 spdkcli_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:47.026 * Looking for test storage... 00:18:47.026 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:47.026 08:55:23 spdkcli_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:47.026 08:55:23 spdkcli_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:18:47.026 08:55:23 spdkcli_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:47.026 08:55:23 spdkcli_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:47.026 08:55:23 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:47.026 08:55:23 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:47.026 08:55:23 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:47.026 08:55:23 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:18:47.026 08:55:23 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:18:47.026 08:55:23 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:18:47.026 08:55:23 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:18:47.026 08:55:23 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:18:47.026 08:55:23 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:18:47.026 08:55:23 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:18:47.026 08:55:23 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:47.026 08:55:23 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:18:47.026 08:55:23 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:18:47.026 08:55:23 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:47.026 08:55:23 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:47.026 08:55:23 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:18:47.026 08:55:23 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:18:47.026 08:55:23 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:47.026 08:55:23 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:18:47.026 08:55:23 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:47.026 08:55:23 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:18:47.026 08:55:23 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:18:47.026 08:55:23 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:47.026 08:55:23 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:18:47.026 08:55:23 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:47.026 08:55:23 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:47.026 08:55:23 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:47.026 08:55:23 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:18:47.026 08:55:23 spdkcli_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:47.026 08:55:23 spdkcli_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:47.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.026 --rc genhtml_branch_coverage=1 00:18:47.026 --rc genhtml_function_coverage=1 00:18:47.026 --rc genhtml_legend=1 00:18:47.026 --rc geninfo_all_blocks=1 00:18:47.026 --rc geninfo_unexecuted_blocks=1 00:18:47.026 00:18:47.026 ' 00:18:47.026 08:55:23 spdkcli_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:47.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.026 --rc genhtml_branch_coverage=1 00:18:47.026 --rc genhtml_function_coverage=1 00:18:47.026 --rc genhtml_legend=1 00:18:47.026 --rc geninfo_all_blocks=1 00:18:47.026 --rc geninfo_unexecuted_blocks=1 00:18:47.026 00:18:47.026 ' 00:18:47.026 
08:55:23 spdkcli_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:47.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.026 --rc genhtml_branch_coverage=1 00:18:47.026 --rc genhtml_function_coverage=1 00:18:47.026 --rc genhtml_legend=1 00:18:47.026 --rc geninfo_all_blocks=1 00:18:47.026 --rc geninfo_unexecuted_blocks=1 00:18:47.026 00:18:47.026 ' 00:18:47.026 08:55:23 spdkcli_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:47.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.026 --rc genhtml_branch_coverage=1 00:18:47.026 --rc genhtml_function_coverage=1 00:18:47.026 --rc genhtml_legend=1 00:18:47.026 --rc geninfo_all_blocks=1 00:18:47.026 --rc geninfo_unexecuted_blocks=1 00:18:47.026 00:18:47.026 ' 00:18:47.026 08:55:23 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:47.026 08:55:23 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:47.026 08:55:23 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:47.026 08:55:23 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:18:47.026 08:55:23 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:18:47.026 08:55:23 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:18:47.026 08:55:23 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:18:47.026 08:55:23 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:18:47.026 08:55:23 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:18:47.027 08:55:23 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:18:47.027 08:55:23 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:18:47.027 08:55:23 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:18:47.027 08:55:23 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:18:47.027 08:55:23 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:18:47.027 08:55:23 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:18:47.027 08:55:23 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:18:47.027 08:55:23 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:18:47.027 08:55:23 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:18:47.027 08:55:23 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:18:47.027 08:55:23 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:18:47.027 08:55:23 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:18:47.027 08:55:23 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:18:47.027 08:55:23 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:18:47.027 08:55:23 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:18:47.027 08:55:23 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:18:47.027 08:55:23 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:47.027 08:55:23 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:47.027 08:55:23 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:47.027 08:55:23 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:47.027 08:55:23 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:47.027 08:55:23 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:47.027 08:55:23 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:18:47.027 08:55:23 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:18:47.027 08:55:23 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:47.027 08:55:23 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:47.027 08:55:23 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:18:47.027 08:55:23 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=85577 00:18:47.027 08:55:23 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:18:47.027 08:55:23 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 85577 00:18:47.027 08:55:23 spdkcli_raid -- common/autotest_common.sh@831 -- # '[' -z 85577 ']' 00:18:47.027 08:55:23 spdkcli_raid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.027 08:55:23 spdkcli_raid -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:47.027 08:55:23 spdkcli_raid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.027 08:55:23 spdkcli_raid -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:47.027 08:55:23 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:47.287 [2024-10-05 08:55:23.533736] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:18:47.287 [2024-10-05 08:55:23.533929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85577 ] 00:18:47.287 [2024-10-05 08:55:23.697845] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:47.547 [2024-10-05 08:55:23.893370] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.547 [2024-10-05 08:55:23.893404] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.485 08:55:24 spdkcli_raid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:48.485 08:55:24 spdkcli_raid -- common/autotest_common.sh@864 -- # return 0 00:18:48.485 08:55:24 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:18:48.485 08:55:24 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:48.485 08:55:24 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:48.485 08:55:24 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:18:48.485 08:55:24 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:48.485 08:55:24 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:48.485 08:55:24 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:18:48.485 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:18:48.485 ' 00:18:49.865 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:18:49.865 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:18:50.124 08:55:26 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:18:50.124 08:55:26 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:50.124 08:55:26 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:18:50.124 08:55:26 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:18:50.124 08:55:26 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:50.124 08:55:26 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:50.124 08:55:26 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:18:50.124 ' 00:18:51.063 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:18:51.322 08:55:27 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:18:51.322 08:55:27 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:51.322 08:55:27 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:51.322 08:55:27 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:18:51.322 08:55:27 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:51.322 08:55:27 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:51.322 08:55:27 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:18:51.322 08:55:27 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:18:51.890 08:55:28 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:18:51.890 08:55:28 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:18:51.890 08:55:28 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:18:51.890 08:55:28 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:51.890 08:55:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:51.890 08:55:28 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:18:51.890 08:55:28 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:51.890 08:55:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:51.890 08:55:28 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:18:51.890 ' 00:18:52.826 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:18:53.085 08:55:29 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:18:53.085 08:55:29 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:53.085 08:55:29 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:53.085 08:55:29 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:18:53.085 08:55:29 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:53.085 08:55:29 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:53.085 08:55:29 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:18:53.085 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:18:53.085 ' 00:18:54.467 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:18:54.467 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:18:54.467 08:55:30 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:18:54.467 08:55:30 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:54.467 08:55:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:54.727 08:55:30 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 85577 00:18:54.727 08:55:30 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 85577 ']' 00:18:54.727 08:55:30 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 85577 00:18:54.727 08:55:30 spdkcli_raid -- 
common/autotest_common.sh@955 -- # uname 00:18:54.727 08:55:30 spdkcli_raid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:54.727 08:55:30 spdkcli_raid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85577 00:18:54.727 08:55:30 spdkcli_raid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:54.727 08:55:30 spdkcli_raid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:54.727 08:55:30 spdkcli_raid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85577' 00:18:54.727 killing process with pid 85577 00:18:54.727 08:55:30 spdkcli_raid -- common/autotest_common.sh@969 -- # kill 85577 00:18:54.727 08:55:30 spdkcli_raid -- common/autotest_common.sh@974 -- # wait 85577 00:18:57.270 08:55:33 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:18:57.270 08:55:33 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 85577 ']' 00:18:57.270 08:55:33 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 85577 00:18:57.270 08:55:33 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 85577 ']' 00:18:57.270 08:55:33 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 85577 00:18:57.270 Process with pid 85577 is not found 00:18:57.270 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (85577) - No such process 00:18:57.270 08:55:33 spdkcli_raid -- common/autotest_common.sh@977 -- # echo 'Process with pid 85577 is not found' 00:18:57.270 08:55:33 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:18:57.270 08:55:33 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:18:57.270 08:55:33 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:18:57.270 08:55:33 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:18:57.270 00:18:57.270 real 0m10.209s 00:18:57.270 user 0m20.766s 00:18:57.270 sys 
0m1.157s 00:18:57.270 08:55:33 spdkcli_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:57.270 08:55:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:57.270 ************************************ 00:18:57.270 END TEST spdkcli_raid 00:18:57.270 ************************************ 00:18:57.270 08:55:33 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:57.270 08:55:33 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:57.270 08:55:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:57.270 08:55:33 -- common/autotest_common.sh@10 -- # set +x 00:18:57.270 ************************************ 00:18:57.270 START TEST blockdev_raid5f 00:18:57.270 ************************************ 00:18:57.270 08:55:33 blockdev_raid5f -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:57.270 * Looking for test storage... 00:18:57.270 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:57.270 08:55:33 blockdev_raid5f -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:57.270 08:55:33 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lcov --version 00:18:57.270 08:55:33 blockdev_raid5f -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:57.270 08:55:33 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:57.270 08:55:33 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:57.270 08:55:33 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:57.270 08:55:33 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:57.270 08:55:33 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:18:57.270 08:55:33 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:18:57.270 08:55:33 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:18:57.270 08:55:33 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:18:57.270 08:55:33 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:18:57.270 08:55:33 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:18:57.270 08:55:33 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:18:57.270 08:55:33 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:57.270 08:55:33 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:18:57.270 08:55:33 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:18:57.270 08:55:33 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:57.270 08:55:33 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:57.270 08:55:33 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:18:57.270 08:55:33 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:18:57.270 08:55:33 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:57.270 08:55:33 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:18:57.270 08:55:33 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:18:57.270 08:55:33 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:18:57.270 08:55:33 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:18:57.270 08:55:33 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:57.270 08:55:33 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:18:57.270 08:55:33 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:18:57.270 08:55:33 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:57.270 08:55:33 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:57.270 08:55:33 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:18:57.270 08:55:33 blockdev_raid5f -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:57.270 08:55:33 blockdev_raid5f -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:57.270 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.270 --rc genhtml_branch_coverage=1 00:18:57.270 --rc genhtml_function_coverage=1 00:18:57.270 --rc genhtml_legend=1 00:18:57.270 --rc geninfo_all_blocks=1 00:18:57.270 --rc geninfo_unexecuted_blocks=1 00:18:57.270 00:18:57.270 ' 00:18:57.270 08:55:33 blockdev_raid5f -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:57.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.270 --rc genhtml_branch_coverage=1 00:18:57.270 --rc genhtml_function_coverage=1 00:18:57.270 --rc genhtml_legend=1 00:18:57.270 --rc geninfo_all_blocks=1 00:18:57.270 --rc geninfo_unexecuted_blocks=1 00:18:57.270 00:18:57.270 ' 00:18:57.270 08:55:33 blockdev_raid5f -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:57.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.270 --rc genhtml_branch_coverage=1 00:18:57.270 --rc genhtml_function_coverage=1 00:18:57.270 --rc genhtml_legend=1 00:18:57.270 --rc geninfo_all_blocks=1 00:18:57.270 --rc geninfo_unexecuted_blocks=1 00:18:57.270 00:18:57.270 ' 00:18:57.270 08:55:33 blockdev_raid5f -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:57.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.270 --rc genhtml_branch_coverage=1 00:18:57.270 --rc genhtml_function_coverage=1 00:18:57.270 --rc genhtml_legend=1 00:18:57.270 --rc geninfo_all_blocks=1 00:18:57.270 --rc geninfo_unexecuted_blocks=1 00:18:57.270 00:18:57.270 ' 00:18:57.270 08:55:33 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:57.270 08:55:33 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:18:57.270 08:55:33 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:57.270 08:55:33 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:57.270 08:55:33 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:57.270 08:55:33 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:57.270 08:55:33 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:18:57.270 08:55:33 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:57.270 08:55:33 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:18:57.270 08:55:33 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:18:57.270 08:55:33 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:18:57.270 08:55:33 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:18:57.270 08:55:33 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:18:57.270 08:55:33 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:18:57.270 08:55:33 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:18:57.270 08:55:33 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:18:57.270 08:55:33 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:18:57.270 08:55:33 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:18:57.270 08:55:33 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:18:57.270 08:55:33 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:18:57.270 08:55:33 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:18:57.270 08:55:33 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:18:57.270 08:55:33 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:18:57.270 08:55:33 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:18:57.270 08:55:33 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=85797 00:18:57.270 08:55:33 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:57.270 08:55:33 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:57.270 08:55:33 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 85797 00:18:57.270 08:55:33 blockdev_raid5f -- common/autotest_common.sh@831 -- # '[' -z 85797 ']' 00:18:57.270 08:55:33 blockdev_raid5f -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.270 08:55:33 blockdev_raid5f -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:57.270 08:55:33 blockdev_raid5f -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:57.270 08:55:33 blockdev_raid5f -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:57.270 08:55:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:57.531 [2024-10-05 08:55:33.809788] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:18:57.531 [2024-10-05 08:55:33.810023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85797 ] 00:18:57.531 [2024-10-05 08:55:33.975589] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.791 [2024-10-05 08:55:34.165826] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.746 08:55:34 blockdev_raid5f -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:58.746 08:55:34 blockdev_raid5f -- common/autotest_common.sh@864 -- # return 0 00:18:58.746 08:55:34 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:18:58.746 08:55:34 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:18:58.746 08:55:34 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:18:58.746 08:55:34 blockdev_raid5f -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.746 08:55:34 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:58.746 Malloc0 00:18:58.746 Malloc1 00:18:58.746 Malloc2 00:18:58.746 08:55:35 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.746 08:55:35 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:18:58.746 08:55:35 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.746 08:55:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:58.746 08:55:35 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.746 08:55:35 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:18:58.746 08:55:35 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:18:58.746 08:55:35 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.746 08:55:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:58.746 08:55:35 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.746 08:55:35 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:18:58.746 08:55:35 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.746 08:55:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:58.746 08:55:35 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.746 08:55:35 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:58.746 08:55:35 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.746 08:55:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:58.746 08:55:35 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.746 08:55:35 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:18:58.746 08:55:35 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:18:58.746 08:55:35 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.746 08:55:35 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:18:58.746 08:55:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:59.024 08:55:35 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.024 08:55:35 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:18:59.024 08:55:35 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:18:59.024 08:55:35 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "6df2be3d-21e4-43c3-9ea1-fb9ab3b348de"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "6df2be3d-21e4-43c3-9ea1-fb9ab3b348de",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "6df2be3d-21e4-43c3-9ea1-fb9ab3b348de",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "79ce08ba-c516-4141-aa3c-61491014a159",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"7e8239b9-cf84-4d69-b335-6683a00ff889",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "d5e9ab0c-0760-4d29-b525-d894d6bbdca7",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:18:59.024 08:55:35 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:18:59.024 08:55:35 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:18:59.024 08:55:35 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:18:59.024 08:55:35 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 85797 00:18:59.024 08:55:35 blockdev_raid5f -- common/autotest_common.sh@950 -- # '[' -z 85797 ']' 00:18:59.024 08:55:35 blockdev_raid5f -- common/autotest_common.sh@954 -- # kill -0 85797 00:18:59.024 08:55:35 blockdev_raid5f -- common/autotest_common.sh@955 -- # uname 00:18:59.024 08:55:35 blockdev_raid5f -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:59.024 08:55:35 blockdev_raid5f -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85797 00:18:59.024 08:55:35 blockdev_raid5f -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:59.024 08:55:35 blockdev_raid5f -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:59.024 08:55:35 blockdev_raid5f -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85797' 00:18:59.024 killing process with pid 85797 00:18:59.024 08:55:35 blockdev_raid5f -- common/autotest_common.sh@969 -- # kill 85797 00:18:59.024 08:55:35 blockdev_raid5f -- common/autotest_common.sh@974 -- # wait 85797 00:19:01.567 08:55:37 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:01.567 08:55:37 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:01.567 08:55:37 
blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:19:01.567 08:55:37 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:01.567 08:55:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:01.567 ************************************ 00:19:01.567 START TEST bdev_hello_world 00:19:01.567 ************************************ 00:19:01.567 08:55:37 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:01.826 [2024-10-05 08:55:38.045784] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:19:01.826 [2024-10-05 08:55:38.045896] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85840 ] 00:19:01.826 [2024-10-05 08:55:38.207488] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.086 [2024-10-05 08:55:38.405657] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.657 [2024-10-05 08:55:38.902326] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:02.657 [2024-10-05 08:55:38.902375] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:19:02.657 [2024-10-05 08:55:38.902391] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:02.657 [2024-10-05 08:55:38.902831] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:02.657 [2024-10-05 08:55:38.902957] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:02.657 [2024-10-05 08:55:38.902985] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:02.657 [2024-10-05 08:55:38.903028] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:19:02.657 00:19:02.657 [2024-10-05 08:55:38.903044] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:04.040 00:19:04.040 real 0m2.377s 00:19:04.040 user 0m2.013s 00:19:04.040 sys 0m0.245s 00:19:04.040 08:55:40 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:04.040 08:55:40 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:04.040 ************************************ 00:19:04.040 END TEST bdev_hello_world 00:19:04.040 ************************************ 00:19:04.040 08:55:40 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:19:04.040 08:55:40 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:04.040 08:55:40 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:04.040 08:55:40 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:04.040 ************************************ 00:19:04.040 START TEST bdev_bounds 00:19:04.040 ************************************ 00:19:04.040 08:55:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:19:04.040 08:55:40 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=85872 00:19:04.040 08:55:40 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:04.040 08:55:40 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:04.040 Process bdevio pid: 85872 00:19:04.040 08:55:40 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 85872' 00:19:04.040 08:55:40 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 85872 00:19:04.040 08:55:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 85872 ']' 00:19:04.040 08:55:40 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.040 08:55:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:04.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:04.040 08:55:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.040 08:55:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:04.040 08:55:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:04.040 [2024-10-05 08:55:40.503777] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:19:04.041 [2024-10-05 08:55:40.503902] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85872 ] 00:19:04.301 [2024-10-05 08:55:40.673673] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:04.561 [2024-10-05 08:55:40.872224] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:04.561 [2024-10-05 08:55:40.872390] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.561 [2024-10-05 08:55:40.872413] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:05.131 08:55:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:05.131 08:55:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:19:05.131 08:55:41 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:05.131 I/O targets: 00:19:05.131 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:19:05.131 00:19:05.131 
00:19:05.131 CUnit - A unit testing framework for C - Version 2.1-3 00:19:05.131 http://cunit.sourceforge.net/ 00:19:05.131 00:19:05.131 00:19:05.131 Suite: bdevio tests on: raid5f 00:19:05.131 Test: blockdev write read block ...passed 00:19:05.131 Test: blockdev write zeroes read block ...passed 00:19:05.131 Test: blockdev write zeroes read no split ...passed 00:19:05.131 Test: blockdev write zeroes read split ...passed 00:19:05.391 Test: blockdev write zeroes read split partial ...passed 00:19:05.391 Test: blockdev reset ...passed 00:19:05.391 Test: blockdev write read 8 blocks ...passed 00:19:05.391 Test: blockdev write read size > 128k ...passed 00:19:05.391 Test: blockdev write read invalid size ...passed 00:19:05.391 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:05.391 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:05.391 Test: blockdev write read max offset ...passed 00:19:05.391 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:05.391 Test: blockdev writev readv 8 blocks ...passed 00:19:05.391 Test: blockdev writev readv 30 x 1block ...passed 00:19:05.391 Test: blockdev writev readv block ...passed 00:19:05.391 Test: blockdev writev readv size > 128k ...passed 00:19:05.391 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:05.391 Test: blockdev comparev and writev ...passed 00:19:05.391 Test: blockdev nvme passthru rw ...passed 00:19:05.391 Test: blockdev nvme passthru vendor specific ...passed 00:19:05.391 Test: blockdev nvme admin passthru ...passed 00:19:05.391 Test: blockdev copy ...passed 00:19:05.391 00:19:05.391 Run Summary: Type Total Ran Passed Failed Inactive 00:19:05.391 suites 1 1 n/a 0 0 00:19:05.391 tests 23 23 23 0 0 00:19:05.391 asserts 130 130 130 0 n/a 00:19:05.391 00:19:05.391 Elapsed time = 0.584 seconds 00:19:05.391 0 00:19:05.391 08:55:41 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 85872 00:19:05.391 
08:55:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 85872 ']' 00:19:05.391 08:55:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 85872 00:19:05.391 08:55:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:19:05.391 08:55:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:05.391 08:55:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85872 00:19:05.391 08:55:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:05.391 08:55:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:05.391 killing process with pid 85872 00:19:05.391 08:55:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85872' 00:19:05.391 08:55:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@969 -- # kill 85872 00:19:05.391 08:55:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@974 -- # wait 85872 00:19:06.770 08:55:43 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:06.770 00:19:06.770 real 0m2.831s 00:19:06.770 user 0m6.609s 00:19:06.770 sys 0m0.413s 00:19:06.770 08:55:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:06.770 08:55:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:06.770 ************************************ 00:19:06.770 END TEST bdev_bounds 00:19:06.770 ************************************ 00:19:07.030 08:55:43 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:07.030 08:55:43 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:07.030 08:55:43 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:07.030 
08:55:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:07.030 ************************************ 00:19:07.030 START TEST bdev_nbd 00:19:07.030 ************************************ 00:19:07.031 08:55:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:07.031 08:55:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:07.031 08:55:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:07.031 08:55:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:07.031 08:55:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:07.031 08:55:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:19:07.031 08:55:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:07.031 08:55:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:19:07.031 08:55:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:07.031 08:55:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:07.031 08:55:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:07.031 08:55:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:19:07.031 08:55:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:19:07.031 08:55:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:07.031 08:55:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:19:07.031 08:55:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:19:07.031 08:55:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=85919 00:19:07.031 08:55:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:07.031 08:55:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:07.031 08:55:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 85919 /var/tmp/spdk-nbd.sock 00:19:07.031 08:55:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 85919 ']' 00:19:07.031 08:55:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:07.031 08:55:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:07.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:07.031 08:55:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:07.031 08:55:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:07.031 08:55:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:07.031 [2024-10-05 08:55:43.421545] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:19:07.031 [2024-10-05 08:55:43.421665] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:07.302 [2024-10-05 08:55:43.590035] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.561 [2024-10-05 08:55:43.787741] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.130 08:55:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:08.130 08:55:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:19:08.130 08:55:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:19:08.130 08:55:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:08.130 08:55:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:19:08.130 08:55:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:08.130 08:55:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:19:08.130 08:55:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:08.130 08:55:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:19:08.130 08:55:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:08.130 08:55:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:08.130 08:55:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:08.130 08:55:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:08.130 08:55:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:08.130 08:55:44 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:19:08.130 08:55:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:08.130 08:55:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:08.130 08:55:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:08.130 08:55:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:08.130 08:55:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:08.130 08:55:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:08.130 08:55:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:08.130 08:55:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:08.130 08:55:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:08.130 08:55:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:08.130 08:55:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:08.130 08:55:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:08.130 1+0 records in 00:19:08.130 1+0 records out 00:19:08.130 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000562571 s, 7.3 MB/s 00:19:08.130 08:55:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:08.130 08:55:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:08.130 08:55:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:08.130 08:55:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:19:08.130 08:55:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:08.130 08:55:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:08.130 08:55:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:08.130 08:55:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:08.388 08:55:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:08.388 { 00:19:08.388 "nbd_device": "/dev/nbd0", 00:19:08.388 "bdev_name": "raid5f" 00:19:08.388 } 00:19:08.388 ]' 00:19:08.388 08:55:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:08.388 08:55:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:08.388 { 00:19:08.388 "nbd_device": "/dev/nbd0", 00:19:08.388 "bdev_name": "raid5f" 00:19:08.388 } 00:19:08.388 ]' 00:19:08.388 08:55:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:08.388 08:55:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:08.388 08:55:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:08.388 08:55:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:08.388 08:55:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:08.388 08:55:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:08.388 08:55:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:08.388 08:55:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:08.647 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:19:08.647 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:08.647 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:08.647 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:08.647 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:08.647 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:08.647 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:08.647 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:08.647 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:08.647 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:08.647 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:08.905 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:08.905 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:08.905 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:08.905 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:08.905 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:08.905 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:08.905 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:08.905 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:08.905 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:08.905 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:08.905 08:55:45 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:08.905 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:08.905 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:08.905 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:08.905 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:19:08.905 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:08.905 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:19:08.905 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:08.905 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:08.905 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:08.905 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:19:08.905 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:08.905 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:08.905 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:08.905 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:08.905 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:08.905 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:08.905 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:19:09.163 /dev/nbd0 00:19:09.163 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:09.163 08:55:45 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:09.163 08:55:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:09.163 08:55:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:09.163 08:55:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:09.163 08:55:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:09.163 08:55:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:09.163 08:55:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:09.163 08:55:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:09.163 08:55:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:09.163 08:55:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:09.163 1+0 records in 00:19:09.163 1+0 records out 00:19:09.163 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000474622 s, 8.6 MB/s 00:19:09.163 08:55:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:09.163 08:55:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:09.163 08:55:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:09.163 08:55:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:09.163 08:55:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:09.163 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:09.163 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:09.163 08:55:45 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:09.163 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:09.163 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:09.421 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:09.421 { 00:19:09.421 "nbd_device": "/dev/nbd0", 00:19:09.421 "bdev_name": "raid5f" 00:19:09.421 } 00:19:09.421 ]' 00:19:09.421 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:09.421 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:09.421 { 00:19:09.421 "nbd_device": "/dev/nbd0", 00:19:09.421 "bdev_name": "raid5f" 00:19:09.421 } 00:19:09.421 ]' 00:19:09.421 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:19:09.421 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:19:09.421 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:09.421 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:19:09.421 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:19:09.421 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:19:09.421 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:19:09.421 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:19:09.421 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:09.421 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:09.421 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:09.421 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:09.421 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:09.421 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:09.421 256+0 records in 00:19:09.421 256+0 records out 00:19:09.421 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133364 s, 78.6 MB/s 00:19:09.422 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:09.422 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:09.422 256+0 records in 00:19:09.422 256+0 records out 00:19:09.422 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0298204 s, 35.2 MB/s 00:19:09.422 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:19:09.422 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:09.422 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:09.422 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:09.422 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:09.422 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:09.422 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:09.422 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:09.422 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:09.422 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:09.680 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:09.680 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:09.680 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:09.680 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:09.680 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:09.680 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:09.680 08:55:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:09.680 08:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:09.680 08:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:09.680 08:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:09.681 08:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:09.681 08:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:09.681 08:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:09.681 08:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:09.681 08:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:09.681 08:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:09.681 08:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:09.681 08:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:19:09.939 08:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:09.939 08:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:09.939 08:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:09.939 08:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:09.939 08:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:09.939 08:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:09.939 08:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:09.939 08:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:09.939 08:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:09.939 08:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:09.939 08:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:09.939 08:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:09.939 08:55:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:09.939 08:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:09.939 08:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:09.939 08:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:10.197 malloc_lvol_verify 00:19:10.197 08:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:10.456 5500c061-ae3b-485a-9b59-1f869330c9e6 00:19:10.456 08:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:10.715 9463294d-1ad4-48fd-a06e-c6c419ddd9e2 00:19:10.715 08:55:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:10.715 /dev/nbd0 00:19:10.715 08:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:10.715 08:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:10.715 08:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:10.715 08:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:10.715 08:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:19:10.715 mke2fs 1.47.0 (5-Feb-2023) 00:19:10.974 Discarding device blocks: 0/4096 done 00:19:10.975 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:10.975 00:19:10.975 Allocating group tables: 0/1 done 00:19:10.975 Writing inode tables: 0/1 done 00:19:10.975 Creating journal (1024 blocks): done 00:19:10.975 Writing superblocks and filesystem accounting information: 0/1 done 00:19:10.975 00:19:10.975 08:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:10.975 08:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:10.975 08:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:10.975 08:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:10.975 08:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:10.975 08:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:10.975 08:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:10.975 08:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:10.975 08:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:10.975 08:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:10.975 08:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:10.975 08:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:10.975 08:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:10.975 08:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:10.975 08:55:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:10.975 08:55:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 85919 00:19:10.975 08:55:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 85919 ']' 00:19:10.975 08:55:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 85919 00:19:10.975 08:55:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:19:10.975 08:55:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:10.975 08:55:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85919 00:19:11.235 08:55:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:11.235 08:55:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:11.235 killing process with pid 85919 00:19:11.235 08:55:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85919' 00:19:11.235 08:55:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@969 -- # kill 85919 00:19:11.235 08:55:47 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@974 -- # wait 85919 00:19:12.618 08:55:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:12.618 00:19:12.618 real 0m5.644s 00:19:12.618 user 0m7.491s 00:19:12.618 sys 0m1.347s 00:19:12.618 08:55:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:12.618 08:55:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:12.618 ************************************ 00:19:12.618 END TEST bdev_nbd 00:19:12.618 ************************************ 00:19:12.618 08:55:49 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:19:12.618 08:55:49 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:19:12.618 08:55:49 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:19:12.618 08:55:49 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:19:12.618 08:55:49 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:12.618 08:55:49 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:12.618 08:55:49 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:12.618 ************************************ 00:19:12.618 START TEST bdev_fio 00:19:12.618 ************************************ 00:19:12.618 08:55:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:19:12.618 08:55:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:19:12.618 08:55:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:19:12.618 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:19:12.618 08:55:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:19:12.618 08:55:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:19:12.618 08:55:49 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:19:12.618 08:55:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:19:12.618 08:55:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:19:12.618 08:55:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:12.618 08:55:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:19:12.618 08:55:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:19:12.618 08:55:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:19:12.618 08:55:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:19:12.618 08:55:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:12.618 08:55:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:19:12.618 08:55:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:19:12.618 08:55:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:12.618 08:55:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:19:12.618 08:55:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:19:12.618 08:55:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:19:12.618 08:55:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:19:12.618 08:55:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:19:12.878 08:55:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:19:12.878 08:55:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:19:12.878 08:55:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:12.878 08:55:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:19:12.878 08:55:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:19:12.878 08:55:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:19:12.878 08:55:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:12.878 08:55:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:19:12.878 08:55:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:12.879 08:55:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:12.879 ************************************ 00:19:12.879 START TEST bdev_fio_rw_verify 00:19:12.879 ************************************ 00:19:12.879 08:55:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:12.879 08:55:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:12.879 08:55:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:12.879 08:55:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:12.879 08:55:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:12.879 08:55:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:12.879 08:55:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:19:12.879 08:55:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:12.879 08:55:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:12.879 08:55:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:12.879 08:55:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:19:12.879 08:55:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:12.879 08:55:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:12.879 08:55:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:12.879 08:55:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1347 -- # break 00:19:12.879 08:55:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:12.879 08:55:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:13.139 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:13.139 fio-3.35 00:19:13.139 Starting 1 thread 00:19:25.385 00:19:25.385 job_raid5f: (groupid=0, jobs=1): err= 0: pid=86084: Sat Oct 5 08:56:00 2024 00:19:25.385 read: IOPS=12.6k, BW=49.4MiB/s (51.8MB/s)(494MiB/10001msec) 00:19:25.385 slat (nsec): min=16699, max=61523, avg=18467.78, stdev=1617.28 00:19:25.385 clat (usec): min=10, max=293, avg=126.83, stdev=43.65 00:19:25.385 lat (usec): min=29, max=338, avg=145.30, stdev=43.82 00:19:25.385 clat percentiles (usec): 00:19:25.385 | 50.000th=[ 131], 99.000th=[ 206], 99.900th=[ 227], 99.990th=[ 262], 00:19:25.385 | 99.999th=[ 293] 00:19:25.385 write: IOPS=13.2k, BW=51.6MiB/s (54.1MB/s)(509MiB/9878msec); 0 zone resets 00:19:25.385 slat (usec): min=7, max=1140, avg=15.86, stdev= 4.74 00:19:25.385 clat (usec): min=57, max=1753, avg=293.98, stdev=39.61 00:19:25.385 lat (usec): min=72, max=2105, avg=309.84, stdev=40.73 00:19:25.385 clat percentiles (usec): 00:19:25.385 | 50.000th=[ 297], 99.000th=[ 367], 99.900th=[ 586], 99.990th=[ 1172], 00:19:25.385 | 99.999th=[ 1631] 00:19:25.385 bw ( KiB/s): min=50696, max=53808, per=98.92%, avg=52239.58, stdev=918.40, samples=19 00:19:25.385 iops : min=12674, max=13452, avg=13059.89, stdev=229.60, samples=19 00:19:25.385 lat (usec) : 20=0.01%, 50=0.01%, 
100=16.99%, 250=38.94%, 500=44.00% 00:19:25.385 lat (usec) : 750=0.04%, 1000=0.02% 00:19:25.385 lat (msec) : 2=0.01% 00:19:25.385 cpu : usr=98.88%, sys=0.45%, ctx=27, majf=0, minf=10294 00:19:25.385 IO depths : 1=7.6%, 2=19.9%, 4=55.2%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:25.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.385 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.385 issued rwts: total=126364,130414,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:25.385 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:25.385 00:19:25.385 Run status group 0 (all jobs): 00:19:25.385 READ: bw=49.4MiB/s (51.8MB/s), 49.4MiB/s-49.4MiB/s (51.8MB/s-51.8MB/s), io=494MiB (518MB), run=10001-10001msec 00:19:25.385 WRITE: bw=51.6MiB/s (54.1MB/s), 51.6MiB/s-51.6MiB/s (54.1MB/s-54.1MB/s), io=509MiB (534MB), run=9878-9878msec 00:19:25.385 ----------------------------------------------------- 00:19:25.385 Suppressions used: 00:19:25.385 count bytes template 00:19:25.385 1 7 /usr/src/fio/parse.c 00:19:25.385 130 12480 /usr/src/fio/iolog.c 00:19:25.385 1 8 libtcmalloc_minimal.so 00:19:25.385 1 904 libcrypto.so 00:19:25.385 ----------------------------------------------------- 00:19:25.385 00:19:25.385 00:19:25.385 real 0m12.641s 00:19:25.385 user 0m12.933s 00:19:25.385 sys 0m0.723s 00:19:25.385 08:56:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:25.385 08:56:01 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:19:25.385 ************************************ 00:19:25.385 END TEST bdev_fio_rw_verify 00:19:25.385 ************************************ 00:19:25.648 08:56:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:19:25.648 08:56:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:25.648 08:56:01 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:19:25.648 08:56:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:25.648 08:56:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:19:25.648 08:56:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:19:25.648 08:56:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:19:25.648 08:56:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:19:25.648 08:56:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:25.648 08:56:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:19:25.648 08:56:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:19:25.648 08:56:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:25.648 08:56:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:19:25.648 08:56:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:19:25.648 08:56:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:19:25.648 08:56:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:19:25.648 08:56:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:19:25.648 08:56:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "6df2be3d-21e4-43c3-9ea1-fb9ab3b348de"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": 
"6df2be3d-21e4-43c3-9ea1-fb9ab3b348de",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "6df2be3d-21e4-43c3-9ea1-fb9ab3b348de",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "79ce08ba-c516-4141-aa3c-61491014a159",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "7e8239b9-cf84-4d69-b335-6683a00ff889",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "d5e9ab0c-0760-4d29-b525-d894d6bbdca7",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:25.648 08:56:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:19:25.648 08:56:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:25.648 /home/vagrant/spdk_repo/spdk 00:19:25.648 08:56:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:19:25.648 08:56:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:19:25.648 08:56:01 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # 
return 0 00:19:25.648 00:19:25.648 real 0m12.932s 00:19:25.648 user 0m13.062s 00:19:25.648 sys 0m0.855s 00:19:25.648 08:56:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:25.648 08:56:01 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:25.648 ************************************ 00:19:25.648 END TEST bdev_fio 00:19:25.648 ************************************ 00:19:25.648 08:56:02 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:25.648 08:56:02 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:25.648 08:56:02 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:19:25.648 08:56:02 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:25.648 08:56:02 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:25.648 ************************************ 00:19:25.648 START TEST bdev_verify 00:19:25.648 ************************************ 00:19:25.648 08:56:02 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:25.908 [2024-10-05 08:56:02.130716] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:19:25.908 [2024-10-05 08:56:02.131308] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86176 ] 00:19:25.908 [2024-10-05 08:56:02.295853] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:26.168 [2024-10-05 08:56:02.494089] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.168 [2024-10-05 08:56:02.494132] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:26.736 Running I/O for 5 seconds... 00:19:31.858 10941.00 IOPS, 42.74 MiB/s 10955.00 IOPS, 42.79 MiB/s 10957.00 IOPS, 42.80 MiB/s 10958.00 IOPS, 42.80 MiB/s 10937.60 IOPS, 42.73 MiB/s 00:19:31.858 Latency(us) 00:19:31.858 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.858 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:31.858 Verification LBA range: start 0x0 length 0x2000 00:19:31.858 raid5f : 5.02 4417.15 17.25 0.00 0.00 43785.70 296.92 30907.81 00:19:31.859 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:31.859 Verification LBA range: start 0x2000 length 0x2000 00:19:31.859 raid5f : 5.02 6528.28 25.50 0.00 0.00 29547.85 106.87 22322.31 00:19:31.859 =================================================================================================================== 00:19:31.859 Total : 10945.43 42.76 0.00 0.00 35291.35 106.87 30907.81 00:19:33.238 00:19:33.238 real 0m7.419s 00:19:33.238 user 0m13.511s 00:19:33.238 sys 0m0.293s 00:19:33.238 08:56:09 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:33.238 08:56:09 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:19:33.238 ************************************ 00:19:33.238 END TEST bdev_verify 00:19:33.238 
************************************ 00:19:33.238 08:56:09 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:33.238 08:56:09 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:19:33.238 08:56:09 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:33.238 08:56:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:33.238 ************************************ 00:19:33.238 START TEST bdev_verify_big_io 00:19:33.238 ************************************ 00:19:33.238 08:56:09 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:33.238 [2024-10-05 08:56:09.625710] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:19:33.238 [2024-10-05 08:56:09.625829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86227 ] 00:19:33.498 [2024-10-05 08:56:09.794305] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:33.757 [2024-10-05 08:56:09.992737] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:33.758 [2024-10-05 08:56:09.992755] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.324 Running I/O for 5 seconds... 
00:19:39.442 633.00 IOPS, 39.56 MiB/s 760.00 IOPS, 47.50 MiB/s 761.33 IOPS, 47.58 MiB/s 793.25 IOPS, 49.58 MiB/s 799.00 IOPS, 49.94 MiB/s 00:19:39.442 Latency(us) 00:19:39.442 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.442 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:39.442 Verification LBA range: start 0x0 length 0x200 00:19:39.442 raid5f : 5.30 359.64 22.48 0.00 0.00 8828404.62 211.95 380967.35 00:19:39.442 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:39.442 Verification LBA range: start 0x200 length 0x200 00:19:39.442 raid5f : 5.24 448.39 28.02 0.00 0.00 7088662.37 155.61 313199.12 00:19:39.442 =================================================================================================================== 00:19:39.442 Total : 808.03 50.50 0.00 0.00 7867926.18 155.61 380967.35 00:19:40.841 00:19:40.841 real 0m7.750s 00:19:40.841 user 0m14.169s 00:19:40.841 sys 0m0.282s 00:19:40.841 08:56:17 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:40.841 08:56:17 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:19:40.841 ************************************ 00:19:40.841 END TEST bdev_verify_big_io 00:19:40.841 ************************************ 00:19:41.100 08:56:17 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:41.100 08:56:17 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:19:41.100 08:56:17 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:41.100 08:56:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:41.100 ************************************ 00:19:41.100 START TEST bdev_write_zeroes 00:19:41.100 ************************************ 
00:19:41.100 08:56:17 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:41.100 [2024-10-05 08:56:17.453871] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:19:41.100 [2024-10-05 08:56:17.454017] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86283 ] 00:19:41.360 [2024-10-05 08:56:17.621066] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.360 [2024-10-05 08:56:17.813842] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.928 Running I/O for 1 seconds... 00:19:42.862 30447.00 IOPS, 118.93 MiB/s 00:19:42.862 Latency(us) 00:19:42.862 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.862 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:42.862 raid5f : 1.01 30426.42 118.85 0.00 0.00 4195.87 1223.43 5723.67 00:19:42.862 =================================================================================================================== 00:19:42.863 Total : 30426.42 118.85 0.00 0.00 4195.87 1223.43 5723.67 00:19:44.767 00:19:44.767 real 0m3.417s 00:19:44.767 user 0m2.996s 00:19:44.767 sys 0m0.295s 00:19:44.767 08:56:20 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:44.767 08:56:20 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:19:44.767 ************************************ 00:19:44.767 END TEST bdev_write_zeroes 00:19:44.767 ************************************ 00:19:44.767 08:56:20 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:44.767 08:56:20 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:19:44.767 08:56:20 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:44.767 08:56:20 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:44.767 ************************************ 00:19:44.767 START TEST bdev_json_nonenclosed 00:19:44.767 ************************************ 00:19:44.767 08:56:20 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:44.767 [2024-10-05 08:56:20.945858] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 00:19:44.767 [2024-10-05 08:56:20.945978] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86318 ] 00:19:44.767 [2024-10-05 08:56:21.111026] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.027 [2024-10-05 08:56:21.311154] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.027 [2024-10-05 08:56:21.311243] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:19:45.027 [2024-10-05 08:56:21.311259] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:45.027 [2024-10-05 08:56:21.311269] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:45.287 00:19:45.287 real 0m0.846s 00:19:45.287 user 0m0.599s 00:19:45.287 sys 0m0.141s 00:19:45.287 08:56:21 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:45.287 08:56:21 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:19:45.287 ************************************ 00:19:45.287 END TEST bdev_json_nonenclosed 00:19:45.287 ************************************ 00:19:45.287 08:56:21 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:45.287 08:56:21 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:19:45.287 08:56:21 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:45.548 08:56:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:45.548 ************************************ 00:19:45.548 START TEST bdev_json_nonarray 00:19:45.548 ************************************ 00:19:45.548 08:56:21 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:45.548 [2024-10-05 08:56:21.858735] Starting SPDK v25.01-pre git sha1 3950cd1bb / DPDK 24.03.0 initialization... 
00:19:45.548 [2024-10-05 08:56:21.858839] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86343 ] 00:19:45.807 [2024-10-05 08:56:22.021462] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.807 [2024-10-05 08:56:22.220865] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.807 [2024-10-05 08:56:22.220974] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:19:45.807 [2024-10-05 08:56:22.220993] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:45.807 [2024-10-05 08:56:22.221003] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:46.376 00:19:46.377 real 0m0.840s 00:19:46.377 user 0m0.601s 00:19:46.377 sys 0m0.132s 00:19:46.377 08:56:22 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:46.377 08:56:22 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:19:46.377 ************************************ 00:19:46.377 END TEST bdev_json_nonarray 00:19:46.377 ************************************ 00:19:46.377 08:56:22 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:19:46.377 08:56:22 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:19:46.377 08:56:22 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:19:46.377 08:56:22 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:19:46.377 08:56:22 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:19:46.377 08:56:22 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:19:46.377 08:56:22 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:46.377 08:56:22 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:19:46.377 08:56:22 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:19:46.377 08:56:22 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:19:46.377 08:56:22 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:19:46.377 00:19:46.377 real 0m49.238s 00:19:46.377 user 1m5.599s 00:19:46.377 sys 0m5.128s 00:19:46.377 08:56:22 blockdev_raid5f -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:46.377 08:56:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:46.377 ************************************ 00:19:46.377 END TEST blockdev_raid5f 00:19:46.377 ************************************ 00:19:46.377 08:56:22 -- spdk/autotest.sh@194 -- # uname -s 00:19:46.377 08:56:22 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:19:46.377 08:56:22 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:46.377 08:56:22 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:46.377 08:56:22 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:19:46.377 08:56:22 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:19:46.377 08:56:22 -- spdk/autotest.sh@256 -- # timing_exit lib 00:19:46.377 08:56:22 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:46.377 08:56:22 -- common/autotest_common.sh@10 -- # set +x 00:19:46.377 08:56:22 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:19:46.377 08:56:22 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:19:46.377 08:56:22 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:19:46.377 08:56:22 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:19:46.377 08:56:22 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:19:46.377 08:56:22 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:19:46.377 08:56:22 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:19:46.377 08:56:22 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:19:46.377 08:56:22 -- spdk/autotest.sh@334 -- # '[' 
0 -eq 1 ']' 00:19:46.377 08:56:22 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:19:46.377 08:56:22 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:19:46.377 08:56:22 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:19:46.377 08:56:22 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:19:46.377 08:56:22 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:19:46.377 08:56:22 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:19:46.377 08:56:22 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:19:46.377 08:56:22 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:19:46.377 08:56:22 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:19:46.377 08:56:22 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:19:46.377 08:56:22 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:19:46.377 08:56:22 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:46.377 08:56:22 -- common/autotest_common.sh@10 -- # set +x 00:19:46.377 08:56:22 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:19:46.377 08:56:22 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:19:46.377 08:56:22 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:19:46.377 08:56:22 -- common/autotest_common.sh@10 -- # set +x 00:19:49.060 INFO: APP EXITING 00:19:49.060 INFO: killing all VMs 00:19:49.060 INFO: killing vhost app 00:19:49.060 INFO: EXIT DONE 00:19:49.320 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:49.320 Waiting for block devices as requested 00:19:49.320 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:49.581 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:50.521 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:50.521 Cleaning 00:19:50.521 Removing: /var/run/dpdk/spdk0/config 00:19:50.521 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:19:50.521 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:19:50.521 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:19:50.521 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:19:50.521 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:19:50.521 Removing: /var/run/dpdk/spdk0/hugepage_info 00:19:50.521 Removing: /dev/shm/spdk_tgt_trace.pid56823 00:19:50.521 Removing: /var/run/dpdk/spdk0 00:19:50.521 Removing: /var/run/dpdk/spdk_pid56593 00:19:50.522 Removing: /var/run/dpdk/spdk_pid56823 00:19:50.522 Removing: /var/run/dpdk/spdk_pid57057 00:19:50.522 Removing: /var/run/dpdk/spdk_pid57167 00:19:50.522 Removing: /var/run/dpdk/spdk_pid57223 00:19:50.522 Removing: /var/run/dpdk/spdk_pid57362 00:19:50.522 Removing: /var/run/dpdk/spdk_pid57380 00:19:50.522 Removing: /var/run/dpdk/spdk_pid57596 00:19:50.522 Removing: /var/run/dpdk/spdk_pid57707 00:19:50.522 Removing: /var/run/dpdk/spdk_pid57820 00:19:50.522 Removing: /var/run/dpdk/spdk_pid57947 00:19:50.522 Removing: /var/run/dpdk/spdk_pid58061 00:19:50.522 Removing: /var/run/dpdk/spdk_pid58106 00:19:50.522 Removing: /var/run/dpdk/spdk_pid58148 00:19:50.522 Removing: /var/run/dpdk/spdk_pid58224 00:19:50.522 Removing: /var/run/dpdk/spdk_pid58352 00:19:50.522 Removing: /var/run/dpdk/spdk_pid58794 00:19:50.522 Removing: /var/run/dpdk/spdk_pid58874 00:19:50.522 Removing: /var/run/dpdk/spdk_pid58954 00:19:50.522 Removing: /var/run/dpdk/spdk_pid58975 00:19:50.522 Removing: /var/run/dpdk/spdk_pid59138 00:19:50.522 Removing: /var/run/dpdk/spdk_pid59159 00:19:50.522 Removing: /var/run/dpdk/spdk_pid59315 00:19:50.522 Removing: /var/run/dpdk/spdk_pid59336 00:19:50.522 Removing: /var/run/dpdk/spdk_pid59406 00:19:50.522 Removing: /var/run/dpdk/spdk_pid59429 00:19:50.522 Removing: /var/run/dpdk/spdk_pid59499 00:19:50.522 Removing: /var/run/dpdk/spdk_pid59517 00:19:50.522 Removing: /var/run/dpdk/spdk_pid59723 00:19:50.522 Removing: /var/run/dpdk/spdk_pid59765 00:19:50.522 Removing: /var/run/dpdk/spdk_pid59854 00:19:50.522 Removing: /var/run/dpdk/spdk_pid61062 00:19:50.522 Removing: 
/var/run/dpdk/spdk_pid61248 00:19:50.782 Removing: /var/run/dpdk/spdk_pid61365 00:19:50.782 Removing: /var/run/dpdk/spdk_pid61924 00:19:50.782 Removing: /var/run/dpdk/spdk_pid62106 00:19:50.782 Removing: /var/run/dpdk/spdk_pid62222 00:19:50.782 Removing: /var/run/dpdk/spdk_pid62786 00:19:50.782 Removing: /var/run/dpdk/spdk_pid63076 00:19:50.782 Removing: /var/run/dpdk/spdk_pid63197 00:19:50.782 Removing: /var/run/dpdk/spdk_pid64433 00:19:50.782 Removing: /var/run/dpdk/spdk_pid64656 00:19:50.782 Removing: /var/run/dpdk/spdk_pid64777 00:19:50.782 Removing: /var/run/dpdk/spdk_pid66012 00:19:50.782 Removing: /var/run/dpdk/spdk_pid66235 00:19:50.782 Removing: /var/run/dpdk/spdk_pid66356 00:19:50.782 Removing: /var/run/dpdk/spdk_pid67592 00:19:50.782 Removing: /var/run/dpdk/spdk_pid67990 00:19:50.782 Removing: /var/run/dpdk/spdk_pid68106 00:19:50.782 Removing: /var/run/dpdk/spdk_pid69429 00:19:50.782 Removing: /var/run/dpdk/spdk_pid69658 00:19:50.782 Removing: /var/run/dpdk/spdk_pid69779 00:19:50.782 Removing: /var/run/dpdk/spdk_pid71108 00:19:50.782 Removing: /var/run/dpdk/spdk_pid71337 00:19:50.782 Removing: /var/run/dpdk/spdk_pid71458 00:19:50.782 Removing: /var/run/dpdk/spdk_pid72779 00:19:50.782 Removing: /var/run/dpdk/spdk_pid73218 00:19:50.782 Removing: /var/run/dpdk/spdk_pid73335 00:19:50.782 Removing: /var/run/dpdk/spdk_pid73457 00:19:50.782 Removing: /var/run/dpdk/spdk_pid73790 00:19:50.782 Removing: /var/run/dpdk/spdk_pid74382 00:19:50.782 Removing: /var/run/dpdk/spdk_pid74682 00:19:50.782 Removing: /var/run/dpdk/spdk_pid75256 00:19:50.782 Removing: /var/run/dpdk/spdk_pid75601 00:19:50.782 Removing: /var/run/dpdk/spdk_pid76211 00:19:50.782 Removing: /var/run/dpdk/spdk_pid76542 00:19:50.782 Removing: /var/run/dpdk/spdk_pid78260 00:19:50.782 Removing: /var/run/dpdk/spdk_pid78665 00:19:50.782 Removing: /var/run/dpdk/spdk_pid79015 00:19:50.782 Removing: /var/run/dpdk/spdk_pid80844 00:19:50.782 Removing: /var/run/dpdk/spdk_pid81277 00:19:50.782 Removing: 
/var/run/dpdk/spdk_pid81667 00:19:50.782 Removing: /var/run/dpdk/spdk_pid82546 00:19:50.782 Removing: /var/run/dpdk/spdk_pid82833 00:19:50.782 Removing: /var/run/dpdk/spdk_pid83626 00:19:50.782 Removing: /var/run/dpdk/spdk_pid83920 00:19:50.782 Removing: /var/run/dpdk/spdk_pid84714 00:19:50.782 Removing: /var/run/dpdk/spdk_pid85006 00:19:50.782 Removing: /var/run/dpdk/spdk_pid85577 00:19:50.782 Removing: /var/run/dpdk/spdk_pid85797 00:19:50.782 Removing: /var/run/dpdk/spdk_pid85840 00:19:50.782 Removing: /var/run/dpdk/spdk_pid85872 00:19:50.782 Removing: /var/run/dpdk/spdk_pid86075 00:19:50.782 Removing: /var/run/dpdk/spdk_pid86176 00:19:50.782 Removing: /var/run/dpdk/spdk_pid86227 00:19:50.782 Removing: /var/run/dpdk/spdk_pid86283 00:19:50.782 Removing: /var/run/dpdk/spdk_pid86318 00:19:50.782 Removing: /var/run/dpdk/spdk_pid86343 00:19:50.782 Clean 00:19:51.043 08:56:27 -- common/autotest_common.sh@1451 -- # return 0 00:19:51.043 08:56:27 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:19:51.043 08:56:27 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:51.043 08:56:27 -- common/autotest_common.sh@10 -- # set +x 00:19:51.043 08:56:27 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:19:51.043 08:56:27 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:51.043 08:56:27 -- common/autotest_common.sh@10 -- # set +x 00:19:51.043 08:56:27 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:51.043 08:56:27 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:19:51.043 08:56:27 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:19:51.043 08:56:27 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:19:51.043 08:56:27 -- spdk/autotest.sh@394 -- # hostname 00:19:51.043 08:56:27 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:19:51.303 geninfo: WARNING: invalid characters removed from testname! 00:20:17.873 08:56:52 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:18.811 08:56:55 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:20.716 08:56:57 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:23.247 08:56:59 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:25.150 08:57:01 -- spdk/autotest.sh@402 -- # lcov --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:27.055 08:57:03 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:28.965 08:57:05 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:20:28.965 08:57:05 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:20:28.965 08:57:05 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:20:28.965 08:57:05 -- common/autotest_common.sh@1681 -- $ lcov --version 00:20:28.965 08:57:05 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:20:28.965 08:57:05 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:20:28.965 08:57:05 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:20:28.965 08:57:05 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:20:28.965 08:57:05 -- scripts/common.sh@336 -- $ IFS=.-: 00:20:28.965 08:57:05 -- scripts/common.sh@336 -- $ read -ra ver1 00:20:28.965 08:57:05 -- scripts/common.sh@337 -- $ IFS=.-: 00:20:28.965 08:57:05 -- scripts/common.sh@337 -- $ read -ra ver2 00:20:28.965 08:57:05 -- scripts/common.sh@338 -- $ local 'op=<' 00:20:28.965 08:57:05 -- scripts/common.sh@340 -- $ ver1_l=2 00:20:28.965 08:57:05 -- scripts/common.sh@341 -- $ ver2_l=1 00:20:28.965 08:57:05 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:20:28.965 08:57:05 -- scripts/common.sh@344 -- $ case "$op" in 00:20:28.965 08:57:05 -- scripts/common.sh@345 -- $ : 1 
00:20:28.965 08:57:05 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:20:28.965 08:57:05 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:28.965 08:57:05 -- scripts/common.sh@365 -- $ decimal 1 00:20:28.965 08:57:05 -- scripts/common.sh@353 -- $ local d=1 00:20:28.965 08:57:05 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:20:28.965 08:57:05 -- scripts/common.sh@355 -- $ echo 1 00:20:28.965 08:57:05 -- scripts/common.sh@365 -- $ ver1[v]=1 00:20:28.965 08:57:05 -- scripts/common.sh@366 -- $ decimal 2 00:20:28.965 08:57:05 -- scripts/common.sh@353 -- $ local d=2 00:20:28.965 08:57:05 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:20:28.965 08:57:05 -- scripts/common.sh@355 -- $ echo 2 00:20:28.965 08:57:05 -- scripts/common.sh@366 -- $ ver2[v]=2 00:20:28.965 08:57:05 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:20:28.965 08:57:05 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:20:28.965 08:57:05 -- scripts/common.sh@368 -- $ return 0 00:20:28.965 08:57:05 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:28.965 08:57:05 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:20:28.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.965 --rc genhtml_branch_coverage=1 00:20:28.965 --rc genhtml_function_coverage=1 00:20:28.965 --rc genhtml_legend=1 00:20:28.965 --rc geninfo_all_blocks=1 00:20:28.965 --rc geninfo_unexecuted_blocks=1 00:20:28.965 00:20:28.965 ' 00:20:28.965 08:57:05 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:20:28.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.965 --rc genhtml_branch_coverage=1 00:20:28.965 --rc genhtml_function_coverage=1 00:20:28.965 --rc genhtml_legend=1 00:20:28.965 --rc geninfo_all_blocks=1 00:20:28.965 --rc geninfo_unexecuted_blocks=1 00:20:28.965 00:20:28.965 ' 00:20:28.965 08:57:05 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 
00:20:28.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.965 --rc genhtml_branch_coverage=1 00:20:28.965 --rc genhtml_function_coverage=1 00:20:28.965 --rc genhtml_legend=1 00:20:28.965 --rc geninfo_all_blocks=1 00:20:28.965 --rc geninfo_unexecuted_blocks=1 00:20:28.965 00:20:28.965 ' 00:20:28.965 08:57:05 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:20:28.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.965 --rc genhtml_branch_coverage=1 00:20:28.965 --rc genhtml_function_coverage=1 00:20:28.965 --rc genhtml_legend=1 00:20:28.965 --rc geninfo_all_blocks=1 00:20:28.965 --rc geninfo_unexecuted_blocks=1 00:20:28.965 00:20:28.965 ' 00:20:28.965 08:57:05 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:28.965 08:57:05 -- scripts/common.sh@15 -- $ shopt -s extglob 00:20:28.965 08:57:05 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:20:28.965 08:57:05 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:28.965 08:57:05 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:28.965 08:57:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.965 08:57:05 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.965 08:57:05 -- paths/export.sh@4 
-- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.965 08:57:05 -- paths/export.sh@5 -- $ export PATH 00:20:28.965 08:57:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.965 08:57:05 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:20:28.965 08:57:05 -- common/autobuild_common.sh@486 -- $ date +%s 00:20:29.226 08:57:05 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728118625.XXXXXX 00:20:29.226 08:57:05 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728118625.kdw3gQ 00:20:29.227 08:57:05 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:20:29.227 08:57:05 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:20:29.227 08:57:05 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:20:29.227 08:57:05 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:20:29.227 08:57:05 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:20:29.227 08:57:05 -- common/autobuild_common.sh@502 -- $ 
get_config_params 00:20:29.227 08:57:05 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:20:29.227 08:57:05 -- common/autotest_common.sh@10 -- $ set +x 00:20:29.227 08:57:05 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:20:29.227 08:57:05 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:20:29.227 08:57:05 -- pm/common@17 -- $ local monitor 00:20:29.227 08:57:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:20:29.227 08:57:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:20:29.227 08:57:05 -- pm/common@25 -- $ sleep 1 00:20:29.227 08:57:05 -- pm/common@21 -- $ date +%s 00:20:29.227 08:57:05 -- pm/common@21 -- $ date +%s 00:20:29.227 08:57:05 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728118625 00:20:29.227 08:57:05 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728118625 00:20:29.227 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728118625_collect-vmstat.pm.log 00:20:29.227 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728118625_collect-cpu-load.pm.log 00:20:30.168 08:57:06 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:20:30.168 08:57:06 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:20:30.168 08:57:06 -- spdk/autopackage.sh@14 -- $ timing_finish 00:20:30.168 08:57:06 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:20:30.168 08:57:06 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:20:30.168 
08:57:06 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:30.168 08:57:06 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:20:30.168 08:57:06 -- pm/common@29 -- $ signal_monitor_resources TERM 00:20:30.168 08:57:06 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:20:30.168 08:57:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:20:30.168 08:57:06 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:20:30.168 08:57:06 -- pm/common@44 -- $ pid=87844 00:20:30.169 08:57:06 -- pm/common@50 -- $ kill -TERM 87844 00:20:30.169 08:57:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:20:30.169 08:57:06 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:20:30.169 08:57:06 -- pm/common@44 -- $ pid=87845 00:20:30.169 08:57:06 -- pm/common@50 -- $ kill -TERM 87845 00:20:30.169 + [[ -n 5429 ]] 00:20:30.169 + sudo kill 5429 00:20:30.179 [Pipeline] } 00:20:30.201 [Pipeline] // timeout 00:20:30.207 [Pipeline] } 00:20:30.218 [Pipeline] // stage 00:20:30.225 [Pipeline] } 00:20:30.236 [Pipeline] // catchError 00:20:30.246 [Pipeline] stage 00:20:30.248 [Pipeline] { (Stop VM) 00:20:30.262 [Pipeline] sh 00:20:30.551 + vagrant halt 00:20:32.460 ==> default: Halting domain... 00:20:40.609 [Pipeline] sh 00:20:40.893 + vagrant destroy -f 00:20:43.487 ==> default: Removing domain... 
00:20:43.501 [Pipeline] sh 00:20:43.786 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:20:43.796 [Pipeline] } 00:20:43.813 [Pipeline] // stage 00:20:43.820 [Pipeline] } 00:20:43.836 [Pipeline] // dir 00:20:43.843 [Pipeline] } 00:20:43.858 [Pipeline] // wrap 00:20:43.865 [Pipeline] } 00:20:43.881 [Pipeline] // catchError 00:20:43.890 [Pipeline] stage 00:20:43.893 [Pipeline] { (Epilogue) 00:20:43.906 [Pipeline] sh 00:20:44.191 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:20:48.405 [Pipeline] catchError 00:20:48.407 [Pipeline] { 00:20:48.421 [Pipeline] sh 00:20:48.707 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:20:48.707 Artifacts sizes are good 00:20:48.717 [Pipeline] } 00:20:48.731 [Pipeline] // catchError 00:20:48.743 [Pipeline] archiveArtifacts 00:20:48.750 Archiving artifacts 00:20:48.864 [Pipeline] cleanWs 00:20:48.877 [WS-CLEANUP] Deleting project workspace... 00:20:48.877 [WS-CLEANUP] Deferred wipeout is used... 00:20:48.884 [WS-CLEANUP] done 00:20:48.886 [Pipeline] } 00:20:48.902 [Pipeline] // stage 00:20:48.908 [Pipeline] } 00:20:48.923 [Pipeline] // node 00:20:48.928 [Pipeline] End of Pipeline 00:20:48.970 Finished: SUCCESS